2025-09-18 00:00:08.163843 | Job console starting
2025-09-18 00:00:08.176478 | Updating git repos
2025-09-18 00:00:08.343273 | Cloning repos into workspace
2025-09-18 00:00:08.545427 | Restoring repo states
2025-09-18 00:00:08.566209 | Merging changes
2025-09-18 00:00:08.566228 | Checking out repos
2025-09-18 00:00:08.977875 | Preparing playbooks
2025-09-18 00:00:09.806744 | Running Ansible setup
2025-09-18 00:00:16.527467 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-18 00:00:18.349186 |
2025-09-18 00:00:18.349319 | PLAY [Base pre]
2025-09-18 00:00:18.399338 |
2025-09-18 00:00:18.399491 | TASK [Setup log path fact]
2025-09-18 00:00:18.435482 | orchestrator | ok
2025-09-18 00:00:18.472520 |
2025-09-18 00:00:18.472652 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-18 00:00:18.517123 | orchestrator | ok
2025-09-18 00:00:18.531745 |
2025-09-18 00:00:18.531847 | TASK [emit-job-header : Print job information]
2025-09-18 00:00:18.572174 | # Job Information
2025-09-18 00:00:18.572410 | Ansible Version: 2.16.14
2025-09-18 00:00:18.572471 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-18 00:00:18.572519 | Pipeline: periodic-midnight
2025-09-18 00:00:18.572548 | Executor: 521e9411259a
2025-09-18 00:00:18.572569 | Triggered by: https://github.com/osism/testbed
2025-09-18 00:00:18.572591 | Event ID: b16fc079f5ee4f11a1eaa64045b6bf94
2025-09-18 00:00:18.585094 |
2025-09-18 00:00:18.585203 | LOOP [emit-job-header : Print node information]
2025-09-18 00:00:18.791393 | orchestrator | ok:
2025-09-18 00:00:18.791633 | orchestrator | # Node Information
2025-09-18 00:00:18.791670 | orchestrator | Inventory Hostname: orchestrator
2025-09-18 00:00:18.791696 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-18 00:00:18.791717 | orchestrator | Username: zuul-testbed03
2025-09-18 00:00:18.791788 | orchestrator | Distro: Debian 12.12
2025-09-18 00:00:18.791817 | orchestrator | Provider: static-testbed
2025-09-18 00:00:18.791839 | orchestrator | Region:
2025-09-18 00:00:18.791861 | orchestrator | Label: testbed-orchestrator
2025-09-18 00:00:18.791881 | orchestrator | Product Name: OpenStack Nova
2025-09-18 00:00:18.791900 | orchestrator | Interface IP: 81.163.193.140
2025-09-18 00:00:18.812791 |
2025-09-18 00:00:18.812897 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-18 00:00:20.328418 | orchestrator -> localhost | changed
2025-09-18 00:00:20.335071 |
2025-09-18 00:00:20.335168 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-18 00:00:22.489565 | orchestrator -> localhost | changed
2025-09-18 00:00:22.506428 |
2025-09-18 00:00:22.506539 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-18 00:00:23.515200 | orchestrator -> localhost | ok
2025-09-18 00:00:23.528409 |
2025-09-18 00:00:23.528522 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-18 00:00:23.557666 | orchestrator | ok
2025-09-18 00:00:23.590974 | orchestrator | included: /var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-18 00:00:23.612374 |
2025-09-18 00:00:23.612486 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-18 00:00:26.649501 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-18 00:00:26.649685 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/work/521038748b9a4196ba8a7486b5537499_id_rsa
2025-09-18 00:00:26.649720 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/work/521038748b9a4196ba8a7486b5537499_id_rsa.pub
2025-09-18 00:00:26.649743 | orchestrator -> localhost | The key fingerprint is:
2025-09-18 00:00:26.649765 | orchestrator -> localhost | SHA256:44bg3RcnEBVPGW2kAznaTLgdj3KgsBMpE0YuzlUafpE zuul-build-sshkey
2025-09-18 00:00:26.649785 | orchestrator -> localhost | The key's randomart image is:
2025-09-18 00:00:26.649813 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-18 00:00:26.649833 | orchestrator -> localhost | | .+o +. .o+oo=. |
2025-09-18 00:00:26.649852 | orchestrator -> localhost | | o+ BE. o.=+..o |
2025-09-18 00:00:26.649871 | orchestrator -> localhost | |. .* = ..O =+. |
2025-09-18 00:00:26.649888 | orchestrator -> localhost | |o.. + . +.* .. |
2025-09-18 00:00:26.649906 | orchestrator -> localhost | | o .. Soo . |
2025-09-18 00:00:26.649927 | orchestrator -> localhost | | . o + . + |
2025-09-18 00:00:26.649946 | orchestrator -> localhost | | . o + . |
2025-09-18 00:00:26.649964 | orchestrator -> localhost | | . . |
2025-09-18 00:00:26.649983 | orchestrator -> localhost | | |
2025-09-18 00:00:26.650001 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-18 00:00:26.650045 | orchestrator -> localhost | ok: Runtime: 0:00:02.091597
2025-09-18 00:00:26.656725 |
2025-09-18 00:00:26.656805 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-18 00:00:26.694173 | orchestrator | ok
2025-09-18 00:00:26.702987 | orchestrator | included: /var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-18 00:00:26.719479 |
2025-09-18 00:00:26.719563 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-18 00:00:26.752629 | orchestrator | skipping: Conditional result was False
2025-09-18 00:00:26.758812 |
2025-09-18 00:00:26.758908 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-18 00:00:27.716618 | orchestrator | changed
2025-09-18 00:00:27.724959 |
2025-09-18 00:00:27.725044 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-18 00:00:27.995463 | orchestrator | ok
2025-09-18 00:00:28.003041 |
2025-09-18 00:00:28.003126 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-18 00:00:28.435059 | orchestrator | ok
2025-09-18 00:00:28.444849 |
2025-09-18 00:00:28.444940 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-18 00:00:28.886292 | orchestrator | ok
2025-09-18 00:00:28.895348 |
2025-09-18 00:00:28.895427 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-18 00:00:28.921141 | orchestrator | skipping: Conditional result was False
2025-09-18 00:00:28.926673 |
2025-09-18 00:00:28.926750 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-18 00:00:30.019601 | orchestrator -> localhost | changed
2025-09-18 00:00:30.031614 |
2025-09-18 00:00:30.031701 | TASK [add-build-sshkey : Add back temp key]
2025-09-18 00:00:30.818135 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/work/521038748b9a4196ba8a7486b5537499_id_rsa (zuul-build-sshkey)
2025-09-18 00:00:30.818313 | orchestrator -> localhost | ok: Runtime: 0:00:00.036171
2025-09-18 00:00:30.824111 |
2025-09-18 00:00:30.824187 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-18 00:00:31.546303 | orchestrator | ok
2025-09-18 00:00:31.551480 |
2025-09-18 00:00:31.551559 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-18 00:00:31.591595 | orchestrator | skipping: Conditional result was False
2025-09-18 00:00:31.691719 |
2025-09-18 00:00:31.691817 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-18 00:00:32.048006 | orchestrator | ok
2025-09-18 00:00:32.061648 |
2025-09-18 00:00:32.061745 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-18 00:00:32.131034 | orchestrator | ok
2025-09-18 00:00:32.136857 |
2025-09-18 00:00:32.136945 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-18 00:00:32.672088 | orchestrator -> localhost | ok
2025-09-18 00:00:32.677859 |
2025-09-18 00:00:32.677932 | TASK [validate-host : Collect information about the host]
2025-09-18 00:00:34.332171 | orchestrator | ok
2025-09-18 00:00:34.351908 |
2025-09-18 00:00:34.352011 | TASK [validate-host : Sanitize hostname]
2025-09-18 00:00:34.405687 | orchestrator | ok
2025-09-18 00:00:34.409949 |
2025-09-18 00:00:34.410025 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-18 00:00:35.404687 | orchestrator -> localhost | changed
2025-09-18 00:00:35.413524 |
2025-09-18 00:00:35.413615 | TASK [validate-host : Collect information about zuul worker]
2025-09-18 00:00:35.922733 | orchestrator | ok
2025-09-18 00:00:35.927142 |
2025-09-18 00:00:35.927223 | TASK [validate-host : Write out all zuul information for each host]
2025-09-18 00:00:36.797766 | orchestrator -> localhost | changed
2025-09-18 00:00:36.807341 |
2025-09-18 00:00:36.807429 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-18 00:00:37.109493 | orchestrator | ok
2025-09-18 00:00:37.114260 |
2025-09-18 00:00:37.114366 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-18 00:01:16.735647 | orchestrator | changed:
2025-09-18 00:01:16.735939 | orchestrator | .d..t...... src/
2025-09-18 00:01:16.735990 | orchestrator | .d..t...... src/github.com/
2025-09-18 00:01:16.736218 | orchestrator | .d..t...... src/github.com/osism/
2025-09-18 00:01:16.736256 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-18 00:01:16.736288 | orchestrator | RedHat.yml
2025-09-18 00:01:16.754165 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-18 00:01:16.754182 | orchestrator | RedHat.yml
2025-09-18 00:01:16.754234 | orchestrator | = 1.53.0"...
2025-09-18 00:01:27.539979 | orchestrator | 00:01:27.539 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-18 00:01:27.568140 | orchestrator | 00:01:27.567 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-18 00:01:27.722073 | orchestrator | 00:01:27.721 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-18 00:01:28.395569 | orchestrator | 00:01:28.395 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-18 00:01:28.603176 | orchestrator | 00:01:28.603 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-18 00:01:29.255465 | orchestrator | 00:01:29.255 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-18 00:01:29.328318 | orchestrator | 00:01:29.328 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-18 00:01:29.763785 | orchestrator | 00:01:29.763 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-18 00:01:29.763843 | orchestrator | 00:01:29.763 STDOUT terraform: Providers are signed by their developers.
2025-09-18 00:01:29.763850 | orchestrator | 00:01:29.763 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-18 00:01:29.763879 | orchestrator | 00:01:29.763 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-18 00:01:29.763967 | orchestrator | 00:01:29.763 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-18 00:01:29.764004 | orchestrator | 00:01:29.763 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-18 00:01:29.764089 | orchestrator | 00:01:29.763 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-18 00:01:29.764144 | orchestrator | 00:01:29.764 STDOUT terraform: you run "tofu init" in the future.
2025-09-18 00:01:29.764151 | orchestrator | 00:01:29.764 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-18 00:01:29.764192 | orchestrator | 00:01:29.764 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-18 00:01:29.764252 | orchestrator | 00:01:29.764 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-18 00:01:29.764259 | orchestrator | 00:01:29.764 STDOUT terraform: should now work.
2025-09-18 00:01:29.767300 | orchestrator | 00:01:29.764 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-18 00:01:29.767339 | orchestrator | 00:01:29.764 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-18 00:01:29.767345 | orchestrator | 00:01:29.764 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-18 00:01:29.855424 | orchestrator | 00:01:29.855 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-18 00:01:29.855516 | orchestrator | 00:01:29.855 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-18 00:01:30.041237 | orchestrator | 00:01:30.041 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-18 00:01:30.041324 | orchestrator | 00:01:30.041 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-18 00:01:30.041342 | orchestrator | 00:01:30.041 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-18 00:01:30.041351 | orchestrator | 00:01:30.041 STDOUT terraform: for this configuration.
2025-09-18 00:01:30.162366 | orchestrator | 00:01:30.162 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
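For reference, a minimal sketch of a required_providers block that would resolve to the versions installed in the init output above (openstack v3.3.2, local v2.5.3, null v3.2.4). Only the ">= 2.2.0" constraint for hashicorp/local and the unconstrained hashicorp/null are visible in the log; the rest of the block, including any openstack constraint, is an assumption.

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # version constraint not shown in this log; init resolved it to v3.3.2
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"        # constraint shown in the init output above
    }
    null = {
      source = "hashicorp/null"   # no constraint; init picked the latest (v3.2.4)
    }
  }
}

As the output above notes, committing the .terraform.lock.hcl that init created pins these provider selections for later runs.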
2025-09-18 00:01:30.162484 | orchestrator | 00:01:30.162 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-18 00:01:30.265869 | orchestrator | 00:01:30.265 STDOUT terraform: ci.auto.tfvars 2025-09-18 00:01:30.269704 | orchestrator | 00:01:30.269 STDOUT terraform: default_custom.tf 2025-09-18 00:01:30.389950 | orchestrator | 00:01:30.389 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-09-18 00:01:31.247495 | orchestrator | 00:01:31.247 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-18 00:01:31.745005 | orchestrator | 00:01:31.744 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-18 00:01:31.959162 | orchestrator | 00:01:31.959 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-18 00:01:31.959229 | orchestrator | 00:01:31.959 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-18 00:01:31.959240 | orchestrator | 00:01:31.959 STDOUT terraform:  + create 2025-09-18 00:01:31.959250 | orchestrator | 00:01:31.959 STDOUT terraform:  <= read (data resources) 2025-09-18 00:01:31.959263 | orchestrator | 00:01:31.959 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-18 00:01:31.959301 | orchestrator | 00:01:31.959 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-18 00:01:31.959338 | orchestrator | 00:01:31.959 STDOUT terraform:  # (config refers to values not yet known) 2025-09-18 00:01:31.959372 | orchestrator | 00:01:31.959 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-18 00:01:31.959401 | orchestrator | 00:01:31.959 STDOUT terraform:  + checksum = (known after apply) 2025-09-18 00:01:31.959432 | orchestrator | 00:01:31.959 STDOUT terraform:  + created_at = (known after apply) 2025-09-18 00:01:31.959468 | orchestrator | 00:01:31.959 STDOUT terraform:  + file = (known after apply) 2025-09-18 00:01:31.959497 | orchestrator | 00:01:31.959 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.959524 | orchestrator | 00:01:31.959 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.959553 | orchestrator | 00:01:31.959 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-18 00:01:31.959576 | orchestrator | 00:01:31.959 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-18 00:01:31.959597 | orchestrator | 00:01:31.959 STDOUT terraform:  + most_recent = true 2025-09-18 00:01:31.959624 | orchestrator | 00:01:31.959 STDOUT terraform:  + name = (known after apply) 2025-09-18 00:01:31.959658 | orchestrator | 00:01:31.959 STDOUT terraform:  + protected = (known after apply) 2025-09-18 00:01:31.959685 | orchestrator | 00:01:31.959 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.959714 | orchestrator | 00:01:31.959 STDOUT terraform:  + schema = (known after apply) 2025-09-18 00:01:31.959743 | orchestrator | 00:01:31.959 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-18 00:01:31.959772 | orchestrator | 00:01:31.959 STDOUT terraform:  + tags = (known after apply) 2025-09-18 00:01:31.959799 | orchestrator | 00:01:31.959 STDOUT terraform:  + updated_at = (known after apply) 2025-09-18 00:01:31.959810 | orchestrator | 
00:01:31.959 STDOUT terraform:  } 2025-09-18 00:01:31.959870 | orchestrator | 00:01:31.959 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-18 00:01:31.959897 | orchestrator | 00:01:31.959 STDOUT terraform:  # (config refers to values not yet known) 2025-09-18 00:01:31.959931 | orchestrator | 00:01:31.959 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-18 00:01:31.959959 | orchestrator | 00:01:31.959 STDOUT terraform:  + checksum = (known after apply) 2025-09-18 00:01:31.959987 | orchestrator | 00:01:31.959 STDOUT terraform:  + created_at = (known after apply) 2025-09-18 00:01:31.960024 | orchestrator | 00:01:31.959 STDOUT terraform:  + file = (known after apply) 2025-09-18 00:01:31.960043 | orchestrator | 00:01:31.960 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.960073 | orchestrator | 00:01:31.960 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.960104 | orchestrator | 00:01:31.960 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-18 00:01:31.960136 | orchestrator | 00:01:31.960 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-18 00:01:31.960155 | orchestrator | 00:01:31.960 STDOUT terraform:  + most_recent = true 2025-09-18 00:01:31.960185 | orchestrator | 00:01:31.960 STDOUT terraform:  + name = (known after apply) 2025-09-18 00:01:31.960212 | orchestrator | 00:01:31.960 STDOUT terraform:  + protected = (known after apply) 2025-09-18 00:01:31.960239 | orchestrator | 00:01:31.960 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.960267 | orchestrator | 00:01:31.960 STDOUT terraform:  + schema = (known after apply) 2025-09-18 00:01:31.960294 | orchestrator | 00:01:31.960 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-18 00:01:31.960323 | orchestrator | 00:01:31.960 STDOUT terraform:  + tags = (known after apply) 2025-09-18 00:01:31.960357 | orchestrator | 00:01:31.960 STDOUT terraform:  + updated_at = (known after apply) 2025-09-18 00:01:31.960371 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-18 00:01:31.960400 | orchestrator | 00:01:31.960 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-18 00:01:31.960429 | orchestrator | 00:01:31.960 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-18 00:01:31.960459 | orchestrator | 00:01:31.960 STDOUT terraform:  + content = (known after apply) 2025-09-18 00:01:31.960493 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-18 00:01:31.960527 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-18 00:01:31.960560 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-18 00:01:31.960593 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-18 00:01:31.960630 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-18 00:01:31.960666 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-18 00:01:31.960679 | orchestrator | 00:01:31.960 STDOUT terraform:  + directory_permission = "0777" 2025-09-18 00:01:31.960717 | orchestrator | 00:01:31.960 STDOUT terraform:  + file_permission = "0644" 2025-09-18 00:01:31.960753 | orchestrator | 00:01:31.960 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-18 00:01:31.960785 | orchestrator | 00:01:31.960 STDOUT 
terraform:  + id = (known after apply) 2025-09-18 00:01:31.960794 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-18 00:01:31.960873 | orchestrator | 00:01:31.960 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-18 00:01:31.960889 | orchestrator | 00:01:31.960 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-18 00:01:31.960901 | orchestrator | 00:01:31.960 STDOUT terraform:  + content = (known after apply) 2025-09-18 00:01:31.960942 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-18 00:01:31.960982 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-18 00:01:31.961015 | orchestrator | 00:01:31.960 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-18 00:01:31.961052 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-18 00:01:31.961089 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-18 00:01:31.961125 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-18 00:01:31.961134 | orchestrator | 00:01:31.961 STDOUT terraform:  + directory_permission = "0777" 2025-09-18 00:01:31.961164 | orchestrator | 00:01:31.961 STDOUT terraform:  + file_permission = "0644" 2025-09-18 00:01:31.961193 | orchestrator | 00:01:31.961 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-18 00:01:31.961232 | orchestrator | 00:01:31.961 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.961242 | orchestrator | 00:01:31.961 STDOUT terraform:  } 2025-09-18 00:01:31.961304 | orchestrator | 00:01:31.961 STDOUT terraform:  # local_file.inventory will be created 2025-09-18 00:01:31.961327 | orchestrator | 00:01:31.961 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-18 00:01:31.961362 | orchestrator | 00:01:31.961 STDOUT terraform:  + content = (known after apply) 2025-09-18 00:01:31.961398 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-18 00:01:31.961430 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-18 00:01:31.961467 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-18 00:01:31.961502 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-18 00:01:31.961535 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-18 00:01:31.961570 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-18 00:01:31.961596 | orchestrator | 00:01:31.961 STDOUT terraform:  + directory_permission = "0777" 2025-09-18 00:01:31.961618 | orchestrator | 00:01:31.961 STDOUT terraform:  + file_permission = "0644" 2025-09-18 00:01:31.961649 | orchestrator | 00:01:31.961 STDOUT terraform:  + filename = "inventory.ci" 2025-09-18 00:01:31.961683 | orchestrator | 00:01:31.961 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.961693 | orchestrator | 00:01:31.961 STDOUT terraform:  } 2025-09-18 00:01:31.961721 | orchestrator | 00:01:31.961 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-18 00:01:31.961749 | orchestrator | 00:01:31.961 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-18 00:01:31.961781 | orchestrator | 00:01:31.961 STDOUT terraform:  + content = (sensitive value) 2025-09-18 
00:01:31.961836 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-18 00:01:31.961857 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-18 00:01:31.961892 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-18 00:01:31.961925 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-18 00:01:31.961959 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-18 00:01:31.961993 | orchestrator | 00:01:31.961 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-18 00:01:31.962031 | orchestrator | 00:01:31.961 STDOUT terraform:  + directory_permission = "0700" 2025-09-18 00:01:31.962056 | orchestrator | 00:01:31.962 STDOUT terraform:  + file_permission = "0600" 2025-09-18 00:01:31.962087 | orchestrator | 00:01:31.962 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-18 00:01:31.962127 | orchestrator | 00:01:31.962 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.962137 | orchestrator | 00:01:31.962 STDOUT terraform:  } 2025-09-18 00:01:31.962159 | orchestrator | 00:01:31.962 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-18 00:01:31.962189 | orchestrator | 00:01:31.962 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-18 00:01:31.962211 | orchestrator | 00:01:31.962 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.962222 | orchestrator | 00:01:31.962 STDOUT terraform:  } 2025-09-18 00:01:31.962267 | orchestrator | 00:01:31.962 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-18 00:01:31.962312 | orchestrator | 00:01:31.962 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-18 00:01:31.962344 | orchestrator | 00:01:31.962 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.962368 | orchestrator | 00:01:31.962 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.962403 | orchestrator | 00:01:31.962 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.962436 | orchestrator | 00:01:31.962 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.962471 | orchestrator | 00:01:31.962 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.962518 | orchestrator | 00:01:31.962 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-18 00:01:31.962553 | orchestrator | 00:01:31.962 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.962573 | orchestrator | 00:01:31.962 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.962597 | orchestrator | 00:01:31.962 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.962622 | orchestrator | 00:01:31.962 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.962631 | orchestrator | 00:01:31.962 STDOUT terraform:  } 2025-09-18 00:01:31.962678 | orchestrator | 00:01:31.962 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-18 00:01:31.962722 | orchestrator | 00:01:31.962 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-18 00:01:31.962756 | orchestrator | 00:01:31.962 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.962778 | orchestrator | 00:01:31.962 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 
00:01:31.962814 | orchestrator | 00:01:31.962 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.962867 | orchestrator | 00:01:31.962 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.962876 | orchestrator | 00:01:31.962 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.962924 | orchestrator | 00:01:31.962 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-18 00:01:31.962959 | orchestrator | 00:01:31.962 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.962979 | orchestrator | 00:01:31.962 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.963002 | orchestrator | 00:01:31.962 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.963026 | orchestrator | 00:01:31.962 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.963035 | orchestrator | 00:01:31.963 STDOUT terraform:  } 2025-09-18 00:01:31.963078 | orchestrator | 00:01:31.963 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-18 00:01:31.963122 | orchestrator | 00:01:31.963 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-18 00:01:31.963155 | orchestrator | 00:01:31.963 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.963174 | orchestrator | 00:01:31.963 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.963209 | orchestrator | 00:01:31.963 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.963244 | orchestrator | 00:01:31.963 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.963277 | orchestrator | 00:01:31.963 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.963326 | orchestrator | 00:01:31.963 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-18 00:01:31.963353 | orchestrator | 00:01:31.963 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.963374 | orchestrator | 00:01:31.963 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.963397 | orchestrator | 00:01:31.963 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.963419 | orchestrator | 00:01:31.963 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.963428 | orchestrator | 00:01:31.963 STDOUT terraform:  } 2025-09-18 00:01:31.963474 | orchestrator | 00:01:31.963 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-18 00:01:31.963517 | orchestrator | 00:01:31.963 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-18 00:01:31.963553 | orchestrator | 00:01:31.963 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.963577 | orchestrator | 00:01:31.963 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.963611 | orchestrator | 00:01:31.963 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.963644 | orchestrator | 00:01:31.963 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.963678 | orchestrator | 00:01:31.963 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.963721 | orchestrator | 00:01:31.963 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-18 00:01:31.963755 | orchestrator | 00:01:31.963 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.963776 | orchestrator | 00:01:31.963 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.963799 | orchestrator | 00:01:31.963 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-18 00:01:31.963841 | orchestrator | 00:01:31.963 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.963848 | orchestrator | 00:01:31.963 STDOUT terraform:  } 2025-09-18 00:01:31.963883 | orchestrator | 00:01:31.963 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-18 00:01:31.963928 | orchestrator | 00:01:31.963 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-18 00:01:31.963964 | orchestrator | 00:01:31.963 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.963987 | orchestrator | 00:01:31.963 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.964022 | orchestrator | 00:01:31.963 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.964056 | orchestrator | 00:01:31.964 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.964089 | orchestrator | 00:01:31.964 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.964132 | orchestrator | 00:01:31.964 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-18 00:01:31.964165 | orchestrator | 00:01:31.964 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.964186 | orchestrator | 00:01:31.964 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.964209 | orchestrator | 00:01:31.964 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.964232 | orchestrator | 00:01:31.964 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.964240 | orchestrator | 00:01:31.964 STDOUT terraform:  } 2025-09-18 00:01:31.964286 | orchestrator | 00:01:31.964 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-18 00:01:31.964331 | orchestrator | 00:01:31.964 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-18 00:01:31.964364 | orchestrator | 00:01:31.964 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.964388 | orchestrator | 00:01:31.964 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.964424 | orchestrator | 00:01:31.964 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.964458 | orchestrator | 00:01:31.964 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.964493 | orchestrator | 00:01:31.964 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.964536 | orchestrator | 00:01:31.964 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-18 00:01:31.964571 | orchestrator | 00:01:31.964 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.964591 | orchestrator | 00:01:31.964 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.964617 | orchestrator | 00:01:31.964 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.964640 | orchestrator | 00:01:31.964 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.964649 | orchestrator | 00:01:31.964 STDOUT terraform:  } 2025-09-18 00:01:31.964695 | orchestrator | 00:01:31.964 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-18 00:01:31.964738 | orchestrator | 00:01:31.964 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-18 00:01:31.964773 | orchestrator | 00:01:31.964 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.964796 | orchestrator | 00:01:31.964 STDOUT terraform:  + availability_zone = "nova" 
2025-09-18 00:01:31.964852 | orchestrator | 00:01:31.964 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.964879 | orchestrator | 00:01:31.964 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.964913 | orchestrator | 00:01:31.964 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.964956 | orchestrator | 00:01:31.964 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-18 00:01:31.964990 | orchestrator | 00:01:31.964 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.965011 | orchestrator | 00:01:31.964 STDOUT terraform:  + size = 80 2025-09-18 00:01:31.965026 | orchestrator | 00:01:31.965 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.965055 | orchestrator | 00:01:31.965 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.965064 | orchestrator | 00:01:31.965 STDOUT terraform:  } 2025-09-18 00:01:31.965109 | orchestrator | 00:01:31.965 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-18 00:01:31.965151 | orchestrator | 00:01:31.965 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.965184 | orchestrator | 00:01:31.965 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.965208 | orchestrator | 00:01:31.965 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.965244 | orchestrator | 00:01:31.965 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.965277 | orchestrator | 00:01:31.965 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.965317 | orchestrator | 00:01:31.965 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-18 00:01:31.965352 | orchestrator | 00:01:31.965 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.965372 | orchestrator | 00:01:31.965 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.965396 | orchestrator | 00:01:31.965 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.965419 | orchestrator | 00:01:31.965 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.965428 | orchestrator | 00:01:31.965 STDOUT terraform:  } 2025-09-18 00:01:31.965474 | orchestrator | 00:01:31.965 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-18 00:01:31.965516 | orchestrator | 00:01:31.965 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.965549 | orchestrator | 00:01:31.965 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.965572 | orchestrator | 00:01:31.965 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.965606 | orchestrator | 00:01:31.965 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.965641 | orchestrator | 00:01:31.965 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.965678 | orchestrator | 00:01:31.965 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-18 00:01:31.965714 | orchestrator | 00:01:31.965 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.965727 | orchestrator | 00:01:31.965 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.965761 | orchestrator | 00:01:31.965 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.965772 | orchestrator | 00:01:31.965 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.965781 | orchestrator | 00:01:31.965 STDOUT terraform:  } 2025-09-18 00:01:31.965845 | orchestrator 
| 00:01:31.965 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-18 00:01:31.965874 | orchestrator | 00:01:31.965 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.965909 | orchestrator | 00:01:31.965 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.965927 | orchestrator | 00:01:31.965 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.965962 | orchestrator | 00:01:31.965 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.965995 | orchestrator | 00:01:31.965 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.966047 | orchestrator | 00:01:31.965 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-18 00:01:31.966082 | orchestrator | 00:01:31.966 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.966100 | orchestrator | 00:01:31.966 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.966124 | orchestrator | 00:01:31.966 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.966150 | orchestrator | 00:01:31.966 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.966159 | orchestrator | 00:01:31.966 STDOUT terraform:  } 2025-09-18 00:01:31.966201 | orchestrator | 00:01:31.966 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-18 00:01:31.966243 | orchestrator | 00:01:31.966 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.966277 | orchestrator | 00:01:31.966 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.966299 | orchestrator | 00:01:31.966 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.966334 | orchestrator | 00:01:31.966 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.966368 | orchestrator | 00:01:31.966 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.966407 | orchestrator | 00:01:31.966 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-18 00:01:31.966443 | orchestrator | 00:01:31.966 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.966463 | orchestrator | 00:01:31.966 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.966486 | orchestrator | 00:01:31.966 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.966508 | orchestrator | 00:01:31.966 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.966518 | orchestrator | 00:01:31.966 STDOUT terraform:  } 2025-09-18 00:01:31.966562 | orchestrator | 00:01:31.966 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-18 00:01:31.966604 | orchestrator | 00:01:31.966 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.966637 | orchestrator | 00:01:31.966 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.966661 | orchestrator | 00:01:31.966 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.966697 | orchestrator | 00:01:31.966 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.966730 | orchestrator | 00:01:31.966 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.966778 | orchestrator | 00:01:31.966 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-18 00:01:31.966804 | orchestrator | 00:01:31.966 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.966863 | orchestrator | 00:01:31.966 STDOUT 
terraform:  + size = 20 2025-09-18 00:01:31.966872 | orchestrator | 00:01:31.966 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.966881 | orchestrator | 00:01:31.966 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.966888 | orchestrator | 00:01:31.966 STDOUT terraform:  } 2025-09-18 00:01:31.967244 | orchestrator | 00:01:31.966 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-18 00:01:31.967256 | orchestrator | 00:01:31.966 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.967263 | orchestrator | 00:01:31.966 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.967270 | orchestrator | 00:01:31.966 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.967278 | orchestrator | 00:01:31.967 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.967285 | orchestrator | 00:01:31.967 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.967293 | orchestrator | 00:01:31.967 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-18 00:01:31.967300 | orchestrator | 00:01:31.967 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.967307 | orchestrator | 00:01:31.967 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.967314 | orchestrator | 00:01:31.967 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.967321 | orchestrator | 00:01:31.967 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.967328 | orchestrator | 00:01:31.967 STDOUT terraform:  } 2025-09-18 00:01:31.967335 | orchestrator | 00:01:31.967 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-18 00:01:31.967345 | orchestrator | 00:01:31.967 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.967353 | orchestrator | 00:01:31.967 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.967360 | orchestrator | 00:01:31.967 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.967367 | orchestrator | 00:01:31.967 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.967377 | orchestrator | 00:01:31.967 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.967415 | orchestrator | 00:01:31.967 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-18 00:01:31.967449 | orchestrator | 00:01:31.967 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.967459 | orchestrator | 00:01:31.967 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.967488 | orchestrator | 00:01:31.967 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.970060 | orchestrator | 00:01:31.967 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.970094 | orchestrator | 00:01:31.967 STDOUT terraform:  } 2025-09-18 00:01:31.970101 | orchestrator | 00:01:31.967 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-18 00:01:31.970108 | orchestrator | 00:01:31.967 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.970125 | orchestrator | 00:01:31.967 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.970132 | orchestrator | 00:01:31.967 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.970138 | orchestrator | 00:01:31.967 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.970144 | orchestrator | 
00:01:31.967 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.970150 | orchestrator | 00:01:31.967 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-18 00:01:31.970157 | orchestrator | 00:01:31.967 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.970168 | orchestrator | 00:01:31.967 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.970175 | orchestrator | 00:01:31.967 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.970181 | orchestrator | 00:01:31.967 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.970187 | orchestrator | 00:01:31.967 STDOUT terraform:  } 2025-09-18 00:01:31.970193 | orchestrator | 00:01:31.967 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-18 00:01:31.970200 | orchestrator | 00:01:31.967 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 00:01:31.970206 | orchestrator | 00:01:31.967 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 00:01:31.970213 | orchestrator | 00:01:31.967 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.970219 | orchestrator | 00:01:31.967 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.970225 | orchestrator | 00:01:31.967 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 00:01:31.970231 | orchestrator | 00:01:31.968 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-18 00:01:31.970238 | orchestrator | 00:01:31.968 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.970245 | orchestrator | 00:01:31.968 STDOUT terraform:  + size = 20 2025-09-18 00:01:31.970251 | orchestrator | 00:01:31.968 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 00:01:31.970258 | orchestrator | 00:01:31.968 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 00:01:31.970264 | orchestrator | 00:01:31.968 STDOUT terraform:  } 2025-09-18 00:01:31.970270 | orchestrator | 00:01:31.968 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-18 00:01:31.970277 | orchestrator | 00:01:31.968 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-18 00:01:31.970284 | orchestrator | 00:01:31.968 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-18 00:01:31.970290 | orchestrator | 00:01:31.968 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-18 00:01:31.970296 | orchestrator | 00:01:31.968 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-18 00:01:31.970303 | orchestrator | 00:01:31.968 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:31.970310 | orchestrator | 00:01:31.968 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 00:01:31.970322 | orchestrator | 00:01:31.968 STDOUT terraform:  + config_drive = true 2025-09-18 00:01:31.970328 | orchestrator | 00:01:31.968 STDOUT terraform:  + created = (known after apply) 2025-09-18 00:01:31.970335 | orchestrator | 00:01:31.968 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-18 00:01:31.970341 | orchestrator | 00:01:31.968 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-18 00:01:31.970360 | orchestrator | 00:01:31.968 STDOUT terraform:  + force_delete = false 2025-09-18 00:01:31.970367 | orchestrator | 00:01:31.968 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-18 00:01:31.970374 | orchestrator | 00:01:31.968 STDOUT terraform:  + id = (known after apply) 2025-09-18 
00:01:31.970380 | orchestrator | 00:01:31.968 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 00:01:31.970386 | orchestrator | 00:01:31.968 STDOUT terraform:  + image_name = (known after apply) 2025-09-18 00:01:31.970393 | orchestrator | 00:01:31.968 STDOUT terraform:  + key_pair = "testbed" 2025-09-18 00:01:31.970399 | orchestrator | 00:01:31.968 STDOUT terraform:  + name = "testbed-manager" 2025-09-18 00:01:31.970405 | orchestrator | 00:01:31.968 STDOUT terraform:  + power_state = "active" 2025-09-18 00:01:31.970410 | orchestrator | 00:01:31.968 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.970417 | orchestrator | 00:01:31.968 STDOUT terraform:  + security_groups = (known after apply) 2025-09-18 00:01:31.970423 | orchestrator | 00:01:31.968 STDOUT terraform:  + stop_before_destroy = false 2025-09-18 00:01:31.970429 | orchestrator | 00:01:31.968 STDOUT terraform:  + updated = (known after apply) 2025-09-18 00:01:31.970435 | orchestrator | 00:01:31.968 STDOUT terraform:  + user_data = (sensitive value) 2025-09-18 00:01:31.970441 | orchestrator | 00:01:31.968 STDOUT terraform:  + block_device { 2025-09-18 00:01:31.970447 | orchestrator | 00:01:31.968 STDOUT terraform:  + boot_index = 0 2025-09-18 00:01:31.970453 | orchestrator | 00:01:31.968 STDOUT terraform:  + delete_on_termination = false 2025-09-18 00:01:31.970462 | orchestrator | 00:01:31.968 STDOUT terraform:  + destination_type = "volume" 2025-09-18 00:01:31.970469 | orchestrator | 00:01:31.968 STDOUT terraform:  + multiattach = false 2025-09-18 00:01:31.970475 | orchestrator | 00:01:31.968 STDOUT terraform:  + source_type = "volume" 2025-09-18 00:01:31.970481 | orchestrator | 00:01:31.968 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 00:01:31.970488 | orchestrator | 00:01:31.968 STDOUT terraform:  } 2025-09-18 00:01:31.970495 | orchestrator | 00:01:31.968 STDOUT terraform:  + network { 2025-09-18 00:01:31.970502 | orchestrator | 00:01:31.968 STDOUT terraform:  + access_network = false 2025-09-18 00:01:31.970508 | orchestrator | 00:01:31.968 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-18 00:01:31.970517 | orchestrator | 00:01:31.969 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-18 00:01:31.970523 | orchestrator | 00:01:31.969 STDOUT terraform:  + mac = (known after apply) 2025-09-18 00:01:31.970535 | orchestrator | 00:01:31.969 STDOUT terraform:  + name = (known after apply) 2025-09-18 00:01:31.970542 | orchestrator | 00:01:31.969 STDOUT terraform:  + port = (known after apply) 2025-09-18 00:01:31.970548 | orchestrator | 00:01:31.969 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 00:01:31.970555 | orchestrator | 00:01:31.969 STDOUT terraform:  } 2025-09-18 00:01:31.970562 | orchestrator | 00:01:31.969 STDOUT terraform:  } 2025-09-18 00:01:31.970569 | orchestrator | 00:01:31.969 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-18 00:01:31.970575 | orchestrator | 00:01:31.969 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-18 00:01:31.970582 | orchestrator | 00:01:31.969 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-18 00:01:31.970588 | orchestrator | 00:01:31.969 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-18 00:01:31.970594 | orchestrator | 00:01:31.969 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-18 00:01:31.970600 | orchestrator | 00:01:31.969 STDOUT terraform:  + all_tags = (known after apply) 
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-0"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-1"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-2"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-3"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
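The six node instances above differ only in their name and in the port they attach to, which is the usual shape of a single counted resource. A minimal HCL sketch that would produce a plan like this — the variable `node_count`, the cloud-init file path, and the `node_base_volume` resource are assumptions, not things visible in this log — is:

resource "openstack_compute_instance_v2" "node_server" {
  count             = var.node_count                         # assumed variable; six nodes in this run
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("${path.module}/user_data.yml")   # hypothetical file; the plan only shows the hash the provider stores

  # Boot from a pre-created volume instead of directly from an image.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id   # assumed volume resource
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach the pre-created management port so the fixed IP is deterministic.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}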
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
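Nine openstack_compute_volume_attach_v2 resources are planned against six instances, so some nodes receive more than one extra volume; the actual volume-to-node mapping is not visible in this log. A sketch of the counted form such attachments usually take, with the volume resource name and the index arithmetic as pure assumptions:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9                                                                # matches this plan; the real expression is not shown
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id    # assumed node/volume mapping
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id     # assumed volume resource
}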
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-18 00:01:31.986606 | orchestrator | 00:01:31.986 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-18 00:01:31.986611 | orchestrator | 00:01:31.986 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-18 00:01:31.986615 | orchestrator | 00:01:31.986 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.986620 | orchestrator | 00:01:31.986 STDOUT terraform:  + port_id = (known after apply) 2025-09-18 00:01:31.986624 | orchestrator | 00:01:31.986 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.986629 | orchestrator | 00:01:31.986 STDOUT terraform:  } 2025-09-18 00:01:31.986634 | orchestrator | 00:01:31.986 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-18 00:01:31.986638 | orchestrator | 00:01:31.986 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-18 00:01:31.986643 | orchestrator | 00:01:31.986 STDOUT terraform:  + address = (known after apply) 2025-09-18 00:01:31.986648 | orchestrator | 00:01:31.986 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:31.986652 | orchestrator | 00:01:31.986 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-18 00:01:31.986657 | orchestrator | 00:01:31.986 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 00:01:31.986662 | orchestrator | 00:01:31.986 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-18 00:01:31.986666 | orchestrator | 00:01:31.986 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.986671 | orchestrator | 00:01:31.986 STDOUT terraform:  + pool = "public" 2025-09-18 00:01:31.986676 | orchestrator | 00:01:31.986 STDOUT terraform:  + port_id = (known after apply) 2025-09-18 00:01:31.986680 | orchestrator | 00:01:31.986 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.986687 | orchestrator | 00:01:31.986 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 00:01:31.986695 | orchestrator | 00:01:31.986 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.986700 | orchestrator | 00:01:31.986 STDOUT terraform:  } 2025-09-18 00:01:31.986705 | orchestrator | 00:01:31.986 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-18 00:01:31.986711 | orchestrator | 00:01:31.986 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-18 00:01:31.986744 | orchestrator | 00:01:31.986 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 00:01:31.986780 | orchestrator | 00:01:31.986 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:31.986808 | orchestrator | 00:01:31.986 STDOUT terraform:  + availability_zone_hints = [ 2025-09-18 00:01:31.986815 | orchestrator | 00:01:31.986 STDOUT terraform:  + "nova", 2025-09-18 00:01:31.986861 | orchestrator | 00:01:31.986 STDOUT terraform:  ] 2025-09-18 00:01:31.986895 | orchestrator | 00:01:31.986 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-18 00:01:31.986928 | orchestrator | 00:01:31.986 STDOUT terraform:  + external = (known after apply) 2025-09-18 00:01:31.986965 | orchestrator | 00:01:31.986 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.987002 | orchestrator | 00:01:31.986 STDOUT terraform:  + mtu = (known after apply) 2025-09-18 00:01:31.987040 | orchestrator | 00:01:31.986 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-18 00:01:31.987075 | orchestrator | 00:01:31.987 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 00:01:31.987112 | orchestrator | 00:01:31.987 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 00:01:31.987148 | orchestrator | 00:01:31.987 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.987185 | orchestrator | 00:01:31.987 STDOUT terraform:  + shared = (known after apply) 2025-09-18 00:01:31.987220 | orchestrator | 00:01:31.987 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.987256 | orchestrator | 00:01:31.987 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-18 00:01:31.987280 | orchestrator | 00:01:31.987 STDOUT terraform:  + segments (known after apply) 2025-09-18 00:01:31.987287 | orchestrator | 00:01:31.987 STDOUT terraform:  } 2025-09-18 00:01:31.987336 | orchestrator | 00:01:31.987 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-18 00:01:31.987381 | orchestrator | 00:01:31.987 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-18 00:01:31.987417 | orchestrator | 00:01:31.987 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 00:01:31.987453 | orchestrator | 00:01:31.987 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-18 00:01:31.987487 | orchestrator | 00:01:31.987 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-18 00:01:31.987524 | orchestrator | 00:01:31.987 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:31.987565 | orchestrator | 00:01:31.987 STDOUT terraform:  + device_id = (known after apply) 2025-09-18 00:01:31.987594 | orchestrator | 00:01:31.987 STDOUT terraform:  + device_owner = (known after apply) 2025-09-18 00:01:31.987630 | orchestrator | 00:01:31.987 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-18 00:01:31.987665 | orchestrator | 00:01:31.987 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 00:01:31.987700 | orchestrator | 00:01:31.987 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.987735 | orchestrator | 00:01:31.987 STDOUT terraform:  + mac_address = (known after apply) 2025-09-18 00:01:31.987771 | orchestrator | 00:01:31.987 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 00:01:31.987805 | orchestrator | 00:01:31.987 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 00:01:31.987868 | orchestrator | 00:01:31.987 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 00:01:31.987888 | orchestrator | 00:01:31.987 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.987925 | orchestrator | 00:01:31.987 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-18 00:01:31.987960 | orchestrator | 00:01:31.987 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.987986 | orchestrator | 00:01:31.987 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 00:01:31.988015 | orchestrator | 00:01:31.987 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-18 00:01:31.988023 | orchestrator | 00:01:31.988 STDOUT terraform:  } 2025-09-18 00:01:31.988044 | orchestrator | 00:01:31.988 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 00:01:31.988071 | orchestrator | 00:01:31.988 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-18 00:01:31.988078 | orchestrator | 00:01:31.988 STDOUT 
terraform:  } 2025-09-18 00:01:31.988104 | orchestrator | 00:01:31.988 STDOUT terraform:  + binding (known after apply) 2025-09-18 00:01:31.988111 | orchestrator | 00:01:31.988 STDOUT terraform:  + fixed_ip { 2025-09-18 00:01:31.988138 | orchestrator | 00:01:31.988 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-18 00:01:31.988166 | orchestrator | 00:01:31.988 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 00:01:31.988173 | orchestrator | 00:01:31.988 STDOUT terraform:  } 2025-09-18 00:01:31.988188 | orchestrator | 00:01:31.988 STDOUT terraform:  } 2025-09-18 00:01:31.988234 | orchestrator | 00:01:31.988 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-18 00:01:31.988278 | orchestrator | 00:01:31.988 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-18 00:01:31.988318 | orchestrator | 00:01:31.988 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 00:01:31.988348 | orchestrator | 00:01:31.988 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-18 00:01:31.988381 | orchestrator | 00:01:31.988 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-18 00:01:31.988417 | orchestrator | 00:01:31.988 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:31.988452 | orchestrator | 00:01:31.988 STDOUT terraform:  + device_id = (known after apply) 2025-09-18 00:01:31.988487 | orchestrator | 00:01:31.988 STDOUT terraform:  + device_owner = (known after apply) 2025-09-18 00:01:31.988522 | orchestrator | 00:01:31.988 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-18 00:01:31.988557 | orchestrator | 00:01:31.988 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 00:01:31.988593 | orchestrator | 00:01:31.988 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.988628 | orchestrator | 00:01:31.988 STDOUT terraform:  + mac_address = (known after apply) 2025-09-18 00:01:31.988662 | orchestrator | 00:01:31.988 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 00:01:31.988697 | orchestrator | 00:01:31.988 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 00:01:31.988732 | orchestrator | 00:01:31.988 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 00:01:31.988767 | orchestrator | 00:01:31.988 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.988801 | orchestrator | 00:01:31.988 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-18 00:01:31.988859 | orchestrator | 00:01:31.988 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.988878 | orchestrator | 00:01:31.988 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 00:01:31.988905 | orchestrator | 00:01:31.988 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-18 00:01:31.988913 | orchestrator | 00:01:31.988 STDOUT terraform:  } 2025-09-18 00:01:31.988932 | orchestrator | 00:01:31.988 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 00:01:31.988961 | orchestrator | 00:01:31.988 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-18 00:01:31.988968 | orchestrator | 00:01:31.988 STDOUT terraform:  } 2025-09-18 00:01:31.988990 | orchestrator | 00:01:31.988 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 00:01:31.989017 | orchestrator | 00:01:31.988 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-18 00:01:31.989024 | orchestrator | 00:01:31.989 STDOUT terraform:  } 
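The manager is made reachable from outside through a floating IP drawn from the "public" pool and bound to the management port above. Reconstructed from the resource names and the pool attribute in this plan, the corresponding HCL is roughly:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}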
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id = (known after apply)
        }
    }
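Each node port pins one fixed management address (192.168.16.10 through .15) and whitelists the same four prefixes via allowed_address_pairs. A sketch of the counted port definition, assuming a node_count variable and a subnet resource named subnet_management (the subnet itself is not part of this excerpt):

resource "openstack_networking_port_v2" "node_port_management" {
  count      = var.node_count                                          # assumed variable; six ports in this run
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed resource name
    ip_address = "192.168.16.${10 + count.index}"
  }

  # Address pairs copied from the plan above; they permit these prefixes
  # as additional source addresses on the node interfaces.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}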
00:01:31.996 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 00:01:31.998141 | orchestrator | 00:01:31.996 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:31.998145 | orchestrator | 00:01:31.996 STDOUT terraform:  + availability_zone_hints = [ 2025-09-18 00:01:31.998150 | orchestrator | 00:01:31.996 STDOUT terraform:  + "nova", 2025-09-18 00:01:31.998154 | orchestrator | 00:01:31.996 STDOUT terraform:  ] 2025-09-18 00:01:31.998158 | orchestrator | 00:01:31.996 STDOUT terraform:  + distributed = (known after apply) 2025-09-18 00:01:31.998161 | orchestrator | 00:01:31.996 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-18 00:01:31.998165 | orchestrator | 00:01:31.996 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-18 00:01:31.998171 | orchestrator | 00:01:31.996 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-18 00:01:31.998175 | orchestrator | 00:01:31.996 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.998179 | orchestrator | 00:01:31.996 STDOUT terraform:  + name = "testbed" 2025-09-18 00:01:31.998183 | orchestrator | 00:01:31.996 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.998187 | orchestrator | 00:01:31.996 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.998190 | orchestrator | 00:01:31.996 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-18 00:01:31.998194 | orchestrator | 00:01:31.996 STDOUT terraform:  } 2025-09-18 00:01:31.998198 | orchestrator | 00:01:31.996 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-18 00:01:31.998203 | orchestrator | 00:01:31.996 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-18 00:01:31.998206 | orchestrator | 00:01:31.996 STDOUT terraform:  + description = "ssh" 2025-09-18 00:01:31.998210 | orchestrator | 00:01:31.996 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:31.998214 | orchestrator | 00:01:31.996 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:31.998218 | orchestrator | 00:01:31.996 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.998221 | orchestrator | 00:01:31.996 STDOUT terraform:  + port_range_max = 22 2025-09-18 00:01:31.998225 | orchestrator | 00:01:31.996 STDOUT terraform:  + port_range_min = 22 2025-09-18 00:01:31.998233 | orchestrator | 00:01:31.996 STDOUT terraform:  + protocol = "tcp" 2025-09-18 00:01:31.998236 | orchestrator | 00:01:31.996 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.998240 | orchestrator | 00:01:31.996 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:31.998244 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:31.998248 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:31.998252 | orchestrator | 00:01:31.997 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:31.998255 | orchestrator | 00:01:31.997 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.998259 | orchestrator | 00:01:31.997 STDOUT terraform:  } 2025-09-18 00:01:31.998263 | orchestrator | 00:01:31.997 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-18 00:01:31.998267 | orchestrator | 00:01:31.997 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-18 00:01:31.998271 | orchestrator | 00:01:31.997 STDOUT terraform:  + description = "wireguard" 2025-09-18 00:01:31.998274 | orchestrator | 00:01:31.997 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:31.998278 | orchestrator | 00:01:31.997 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:31.998285 | orchestrator | 00:01:31.997 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.998289 | orchestrator | 00:01:31.997 STDOUT terraform:  + port_range_max = 51820 2025-09-18 00:01:31.998292 | orchestrator | 00:01:31.997 STDOUT terraform:  + port_range_min = 51820 2025-09-18 00:01:31.998296 | orchestrator | 00:01:31.997 STDOUT terraform:  + protocol = "udp" 2025-09-18 00:01:31.998300 | orchestrator | 00:01:31.997 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.998304 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:31.998307 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:31.998311 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:31.998317 | orchestrator | 00:01:31.997 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:31.998321 | orchestrator | 00:01:31.997 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.998325 | orchestrator | 00:01:31.997 STDOUT terraform:  } 2025-09-18 00:01:31.998329 | orchestrator | 00:01:31.997 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-18 00:01:31.998332 | orchestrator | 00:01:31.997 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-18 00:01:31.998336 | orchestrator | 00:01:31.997 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:31.998340 | orchestrator | 00:01:31.997 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:31.998347 | orchestrator | 00:01:31.997 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.998351 | orchestrator | 00:01:31.997 STDOUT terraform:  + protocol = "tcp" 2025-09-18 00:01:31.998355 | orchestrator | 00:01:31.997 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.998358 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:31.998362 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:31.998366 | orchestrator | 00:01:31.997 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-18 00:01:31.998370 | orchestrator | 00:01:31.997 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:31.998373 | orchestrator | 00:01:31.998 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:31.998377 | orchestrator | 00:01:31.998 STDOUT terraform:  } 2025-09-18 00:01:31.998381 | orchestrator | 00:01:31.998 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-18 00:01:31.998385 | orchestrator | 00:01:31.998 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-18 00:01:31.998388 | orchestrator | 00:01:31.998 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:31.998392 | orchestrator | 00:01:31.998 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-18 00:01:31.998396 | orchestrator | 00:01:31.998 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:31.998400 | orchestrator | 00:01:31.998 STDOUT terraform:  + protocol = "udp" 2025-09-18 00:01:31.998405 | orchestrator | 00:01:31.998 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:31.998409 | orchestrator | 00:01:31.998 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:31.998413 | orchestrator | 00:01:31.998 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:31.998416 | orchestrator | 00:01:31.998 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-18 00:01:31.999131 | orchestrator | 00:01:31.998 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:32.001248 | orchestrator | 00:01:31.998 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.001276 | orchestrator | 00:01:32.001 STDOUT terraform:  } 2025-09-18 00:01:32.001285 | orchestrator | 00:01:32.001 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-18 00:01:32.001344 | orchestrator | 00:01:32.001 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-18 00:01:32.001351 | orchestrator | 00:01:32.001 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:32.001396 | orchestrator | 00:01:32.001 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:32.001403 | orchestrator | 00:01:32.001 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.001454 | orchestrator | 00:01:32.001 STDOUT terraform:  + protocol = "icmp" 2025-09-18 00:01:32.001482 | orchestrator | 00:01:32.001 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.001528 | orchestrator | 00:01:32.001 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:32.001538 | orchestrator | 00:01:32.001 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:32.001597 | orchestrator | 00:01:32.001 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:32.001649 | orchestrator | 00:01:32.001 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:32.001656 | orchestrator | 00:01:32.001 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.001662 | orchestrator | 00:01:32.001 STDOUT terraform:  } 2025-09-18 00:01:32.001723 | orchestrator | 00:01:32.001 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-18 00:01:32.001777 | orchestrator | 00:01:32.001 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-18 00:01:32.001905 | orchestrator | 00:01:32.001 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:32.001917 | orchestrator | 00:01:32.001 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:32.001921 | orchestrator | 00:01:32.001 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.001925 | orchestrator | 00:01:32.001 STDOUT terraform:  + protocol = "tcp" 2025-09-18 00:01:32.001946 | orchestrator | 00:01:32.001 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.001993 | orchestrator | 00:01:32.001 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:32.002041 | orchestrator | 00:01:32.001 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 
00:01:32.002075 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:32.002110 | orchestrator | 00:01:32.002 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:32.002150 | orchestrator | 00:01:32.002 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.002156 | orchestrator | 00:01:32.002 STDOUT terraform:  } 2025-09-18 00:01:32.002210 | orchestrator | 00:01:32.002 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-18 00:01:32.002261 | orchestrator | 00:01:32.002 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-18 00:01:32.002297 | orchestrator | 00:01:32.002 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:32.002325 | orchestrator | 00:01:32.002 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:32.002376 | orchestrator | 00:01:32.002 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.002406 | orchestrator | 00:01:32.002 STDOUT terraform:  + protocol = "udp" 2025-09-18 00:01:32.002443 | orchestrator | 00:01:32.002 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.002481 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:32.002518 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:32.002557 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:32.002609 | orchestrator | 00:01:32.002 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:32.002649 | orchestrator | 00:01:32.002 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.002655 | orchestrator | 00:01:32.002 STDOUT terraform:  } 2025-09-18 00:01:32.002701 | orchestrator | 00:01:32.002 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-18 00:01:32.002746 | orchestrator | 00:01:32.002 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-18 00:01:32.002790 | orchestrator | 00:01:32.002 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:32.002801 | orchestrator | 00:01:32.002 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:32.002855 | orchestrator | 00:01:32.002 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.002862 | orchestrator | 00:01:32.002 STDOUT terraform:  + protocol = "icmp" 2025-09-18 00:01:32.002906 | orchestrator | 00:01:32.002 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.002949 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:32.002984 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:32.003017 | orchestrator | 00:01:32.002 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:32.003053 | orchestrator | 00:01:32.003 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:32.003091 | orchestrator | 00:01:32.003 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.003097 | orchestrator | 00:01:32.003 STDOUT terraform:  } 2025-09-18 00:01:32.003153 | orchestrator | 00:01:32.003 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-18 00:01:32.003213 | orchestrator | 
00:01:32.003 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-18 00:01:32.003230 | orchestrator | 00:01:32.003 STDOUT terraform:  + description = "vrrp" 2025-09-18 00:01:32.003267 | orchestrator | 00:01:32.003 STDOUT terraform:  + direction = "ingress" 2025-09-18 00:01:32.003289 | orchestrator | 00:01:32.003 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 00:01:32.003331 | orchestrator | 00:01:32.003 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.003338 | orchestrator | 00:01:32.003 STDOUT terraform:  + protocol = "112" 2025-09-18 00:01:32.003378 | orchestrator | 00:01:32.003 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.003411 | orchestrator | 00:01:32.003 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 00:01:32.003451 | orchestrator | 00:01:32.003 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 00:01:32.003482 | orchestrator | 00:01:32.003 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 00:01:32.003518 | orchestrator | 00:01:32.003 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 00:01:32.003558 | orchestrator | 00:01:32.003 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.003563 | orchestrator | 00:01:32.003 STDOUT terraform:  } 2025-09-18 00:01:32.003609 | orchestrator | 00:01:32.003 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-18 00:01:32.003653 | orchestrator | 00:01:32.003 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-18 00:01:32.003686 | orchestrator | 00:01:32.003 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:32.003722 | orchestrator | 00:01:32.003 STDOUT terraform:  + description = "management security group" 2025-09-18 00:01:32.003745 | orchestrator | 00:01:32.003 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.003771 | orchestrator | 00:01:32.003 STDOUT terraform:  + name = "testbed-management" 2025-09-18 00:01:32.003810 | orchestrator | 00:01:32.003 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.003859 | orchestrator | 00:01:32.003 STDOUT terraform:  + stateful = (known after apply) 2025-09-18 00:01:32.003890 | orchestrator | 00:01:32.003 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.003894 | orchestrator | 00:01:32.003 STDOUT terraform:  } 2025-09-18 00:01:32.003948 | orchestrator | 00:01:32.003 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-18 00:01:32.003988 | orchestrator | 00:01:32.003 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-18 00:01:32.004022 | orchestrator | 00:01:32.003 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:32.004044 | orchestrator | 00:01:32.004 STDOUT terraform:  + description = "node security group" 2025-09-18 00:01:32.004077 | orchestrator | 00:01:32.004 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.004102 | orchestrator | 00:01:32.004 STDOUT terraform:  + name = "testbed-node" 2025-09-18 00:01:32.004124 | orchestrator | 00:01:32.004 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.004163 | orchestrator | 00:01:32.004 STDOUT terraform:  + stateful = (known after apply) 2025-09-18 00:01:32.004197 | orchestrator | 00:01:32.004 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-18 00:01:32.004203 | orchestrator | 00:01:32.004 STDOUT terraform:  } 2025-09-18 00:01:32.004251 | orchestrator | 00:01:32.004 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-18 00:01:32.004296 | orchestrator | 00:01:32.004 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-18 00:01:32.004336 | orchestrator | 00:01:32.004 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 00:01:32.004360 | orchestrator | 00:01:32.004 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-18 00:01:32.004381 | orchestrator | 00:01:32.004 STDOUT terraform:  + dns_nameservers = [ 2025-09-18 00:01:32.004387 | orchestrator | 00:01:32.004 STDOUT terraform:  + "8.8.8.8", 2025-09-18 00:01:32.004402 | orchestrator | 00:01:32.004 STDOUT terraform:  + "9.9.9.9", 2025-09-18 00:01:32.004428 | orchestrator | 00:01:32.004 STDOUT terraform:  ] 2025-09-18 00:01:32.004448 | orchestrator | 00:01:32.004 STDOUT terraform:  + enable_dhcp = true 2025-09-18 00:01:32.004478 | orchestrator | 00:01:32.004 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-18 00:01:32.004508 | orchestrator | 00:01:32.004 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.004533 | orchestrator | 00:01:32.004 STDOUT terraform:  + ip_version = 4 2025-09-18 00:01:32.004562 | orchestrator | 00:01:32.004 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-18 00:01:32.004593 | orchestrator | 00:01:32.004 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-18 00:01:32.004631 | orchestrator | 00:01:32.004 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-18 00:01:32.004670 | orchestrator | 00:01:32.004 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 00:01:32.004687 | orchestrator | 00:01:32.004 STDOUT terraform:  + no_gateway = false 2025-09-18 00:01:32.004721 | orchestrator | 00:01:32.004 STDOUT terraform:  + region = (known after apply) 2025-09-18 00:01:32.004757 | orchestrator | 00:01:32.004 STDOUT terraform:  + service_types = (known after apply) 2025-09-18 00:01:32.004788 | orchestrator | 00:01:32.004 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 00:01:32.004815 | orchestrator | 00:01:32.004 STDOUT terraform:  + allocation_pool { 2025-09-18 00:01:32.004850 | orchestrator | 00:01:32.004 STDOUT terraform:  + end = "192.168.31.250" 2025-09-18 00:01:32.004867 | orchestrator | 00:01:32.004 STDOUT terraform:  + start = "192.168.31.200" 2025-09-18 00:01:32.004873 | orchestrator | 00:01:32.004 STDOUT terraform:  } 2025-09-18 00:01:32.004890 | orchestrator | 00:01:32.004 STDOUT terraform:  } 2025-09-18 00:01:32.004970 | orchestrator | 00:01:32.004 STDOUT terraform:  # terraform_data.image will be created 2025-09-18 00:01:32.004999 | orchestrator | 00:01:32.004 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-18 00:01:32.005024 | orchestrator | 00:01:32.004 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.005046 | orchestrator | 00:01:32.005 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-18 00:01:32.005072 | orchestrator | 00:01:32.005 STDOUT terraform:  + output = (known after apply) 2025-09-18 00:01:32.005078 | orchestrator | 00:01:32.005 STDOUT terraform:  } 2025-09-18 00:01:32.005111 | orchestrator | 00:01:32.005 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-18 00:01:32.005149 | orchestrator | 00:01:32.005 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-18 
00:01:32.005173 | orchestrator | 00:01:32.005 STDOUT terraform:  + id = (known after apply) 2025-09-18 00:01:32.005193 | orchestrator | 00:01:32.005 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-18 00:01:32.005217 | orchestrator | 00:01:32.005 STDOUT terraform:  + output = (known after apply) 2025-09-18 00:01:32.005223 | orchestrator | 00:01:32.005 STDOUT terraform:  } 2025-09-18 00:01:32.005254 | orchestrator | 00:01:32.005 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-18 00:01:32.005260 | orchestrator | 00:01:32.005 STDOUT terraform: Changes to Outputs: 2025-09-18 00:01:32.005291 | orchestrator | 00:01:32.005 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-18 00:01:32.005324 | orchestrator | 00:01:32.005 STDOUT terraform:  + private_key = (sensitive value) 2025-09-18 00:01:32.182916 | orchestrator | 00:01:32.182 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-18 00:01:32.185008 | orchestrator | 00:01:32.182 STDOUT terraform: terraform_data.image: Creating... 2025-09-18 00:01:32.185053 | orchestrator | 00:01:32.183 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=54f1135e-bdb8-2f9b-48f4-e5eb6e51deb5] 2025-09-18 00:01:32.185336 | orchestrator | 00:01:32.185 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=b118be95-0d0c-deba-f605-5bc8084236ce] 2025-09-18 00:01:32.194754 | orchestrator | 00:01:32.194 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-18 00:01:32.203096 | orchestrator | 00:01:32.202 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-18 00:01:32.205383 | orchestrator | 00:01:32.205 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-18 00:01:32.217452 | orchestrator | 00:01:32.217 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-18 00:01:32.218242 | orchestrator | 00:01:32.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-18 00:01:32.218701 | orchestrator | 00:01:32.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-18 00:01:32.219178 | orchestrator | 00:01:32.219 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-18 00:01:32.223341 | orchestrator | 00:01:32.223 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-18 00:01:32.223676 | orchestrator | 00:01:32.223 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-18 00:01:32.224898 | orchestrator | 00:01:32.224 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-18 00:01:32.640641 | orchestrator | 00:01:32.640 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-18 00:01:32.647717 | orchestrator | 00:01:32.647 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-18 00:01:32.647908 | orchestrator | 00:01:32.647 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-18 00:01:32.652591 | orchestrator | 00:01:32.652 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
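Note: the plan entries above show every node management port carrying the same allowed_address_pairs list plus one fixed management IP (192.168.16.14, 192.168.16.15, ...). A minimal HCL sketch of how such a port could be declared follows; the count, variable wiring and the cidrhost() addressing scheme are illustrative assumptions, only the resource type, the address pairs and the fixed-IP pattern come from the plan output.

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6  # indexes [0]..[5] appear in the plan
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        # assumed scheme: .10 + index matches the .14/.15 addresses printed above
        ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
      }

      # address pairs exactly as printed in the plan
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
    }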
2025-09-18 00:01:32.734646 | orchestrator | 00:01:32.734 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-09-18 00:01:32.741337 | orchestrator | 00:01:32.741 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-18 00:01:33.285165 | orchestrator | 00:01:33.284 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=d919ef3b-e07a-4454-bffe-30eea1fafbb4] 2025-09-18 00:01:33.293612 | orchestrator | 00:01:33.293 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-18 00:01:35.851247 | orchestrator | 00:01:35.850 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=2ccda686-b8eb-476a-b4c1-b925092fcf31] 2025-09-18 00:01:35.858384 | orchestrator | 00:01:35.858 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-18 00:01:35.868287 | orchestrator | 00:01:35.868 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=a477f58a-3bf1-4975-968c-c72809c2667c] 2025-09-18 00:01:35.873731 | orchestrator | 00:01:35.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-18 00:01:35.889937 | orchestrator | 00:01:35.889 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=5ef9a296-1867-4994-9a8d-f57ea224fa97] 2025-09-18 00:01:35.895107 | orchestrator | 00:01:35.894 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-18 00:01:35.920593 | orchestrator | 00:01:35.920 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=514241af-fcf0-4d5f-9d7d-ad7f828482f8] 2025-09-18 00:01:35.929370 | orchestrator | 00:01:35.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=df5980fc-abe4-45b8-a678-4af06952c2bd] 2025-09-18 00:01:35.931031 | orchestrator | 00:01:35.930 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-18 00:01:35.935207 | orchestrator | 00:01:35.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-18 00:01:35.941764 | orchestrator | 00:01:35.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=6ac3f343-eabf-4363-a559-72345c6aba0d] 2025-09-18 00:01:35.944730 | orchestrator | 00:01:35.944 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=2e4d0087-2785-46b4-8f27-c306f0a9f7ca] 2025-09-18 00:01:35.947687 | orchestrator | 00:01:35.947 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-18 00:01:35.955974 | orchestrator | 00:01:35.955 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-18 00:01:35.960335 | orchestrator | 00:01:35.960 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=cc24949a0b5bfc85dbfcf528a2c09e90e4b821a9] 2025-09-18 00:01:35.965951 | orchestrator | 00:01:35.965 STDOUT terraform: local_file.id_rsa_pub: Creating... 
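Note: the node_volume resources whose creation is logged here could be expressed with a simple count loop, roughly as in the sketch below. Only the resource type and the indexes [0]..[8] come from the log; the name scheme and size are assumptions, since the volume attributes are not part of the plan excerpt shown above.

    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count = 9                                        # indexes [0]..[8] appear in the apply log
      name  = "testbed-node-volume-${count.index}"     # assumed naming
      size  = 20                                       # assumed size in GB; not shown in this log
    }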
2025-09-18 00:01:35.972077 | orchestrator | 00:01:35.971 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=ff1bb683e718008a917a92a5c4b432c764f223eb] 2025-09-18 00:01:35.972950 | orchestrator | 00:01:35.972 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=b8995dd7-5ece-41d0-bfc6-34744a0d6738] 2025-09-18 00:01:35.976798 | orchestrator | 00:01:35.976 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-18 00:01:36.012763 | orchestrator | 00:01:36.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=79b9ef2b-0416-40aa-a8ac-6a91762c3739] 2025-09-18 00:01:36.645054 | orchestrator | 00:01:36.644 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=a1b46689-8631-4ced-99c9-69cbba2d631b] 2025-09-18 00:01:37.023846 | orchestrator | 00:01:37.023 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=05a723c4-5799-4c4a-a3b7-8068367a36ac] 2025-09-18 00:01:37.032139 | orchestrator | 00:01:37.031 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-18 00:01:39.258079 | orchestrator | 00:01:39.257 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=736f4a0a-a81c-486b-b717-c1252e00987e] 2025-09-18 00:01:39.314448 | orchestrator | 00:01:39.314 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d0ff69fa-cd5e-473d-a298-9bc83966394f] 2025-09-18 00:01:39.331917 | orchestrator | 00:01:39.331 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=d40cfdb9-09fe-4d78-8a8b-049e8e079a3e] 2025-09-18 00:01:39.348971 | orchestrator | 00:01:39.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=e65f687d-72b1-49af-9153-f020b82bb8f3] 2025-09-18 00:01:39.427193 | orchestrator | 00:01:39.426 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d275dfba-7189-46c6-ae83-21710451b98e] 2025-09-18 00:01:39.430989 | orchestrator | 00:01:39.430 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=17d48f0a-00a6-4498-b5e8-7bf3aba944cd] 2025-09-18 00:01:40.381010 | orchestrator | 00:01:40.380 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=145e8a45-c9e6-4570-95b1-51728abac326] 2025-09-18 00:01:40.385227 | orchestrator | 00:01:40.385 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-18 00:01:40.386507 | orchestrator | 00:01:40.386 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-18 00:01:40.389309 | orchestrator | 00:01:40.389 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-18 00:01:40.658540 | orchestrator | 00:01:40.658 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=d6a8a932-d040-412f-aadd-6782633d2393] 2025-09-18 00:01:40.674605 | orchestrator | 00:01:40.674 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-18 00:01:40.676234 | orchestrator | 00:01:40.676 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
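Note: the management subnet, router and router interface whose creation completes here were planned above; reconstructed as HCL they correspond to roughly the following definitions (only attributes printed in the plan are included).

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }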
2025-09-18 00:01:40.678211 | orchestrator | 00:01:40.678 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-18 00:01:40.682922 | orchestrator | 00:01:40.682 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-18 00:01:40.683293 | orchestrator | 00:01:40.683 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-18 00:01:40.683415 | orchestrator | 00:01:40.683 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-18 00:01:40.684342 | orchestrator | 00:01:40.684 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-18 00:01:40.685262 | orchestrator | 00:01:40.685 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-18 00:01:40.733501 | orchestrator | 00:01:40.733 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=fe358cf3-15a7-406c-b593-d052da1cfc9d] 2025-09-18 00:01:40.741723 | orchestrator | 00:01:40.741 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-18 00:01:41.099954 | orchestrator | 00:01:41.099 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=c3b2a235-97b8-4ff1-abaa-6967940e9900] 2025-09-18 00:01:41.108349 | orchestrator | 00:01:41.108 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-18 00:01:41.319072 | orchestrator | 00:01:41.318 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=b59f8b1f-fe11-4dc4-9d5c-68875b7076e3] 2025-09-18 00:01:41.326738 | orchestrator | 00:01:41.326 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-18 00:01:41.371901 | orchestrator | 00:01:41.371 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=2bab16af-bf29-4522-bb6f-9a5cada27d70] 2025-09-18 00:01:41.377598 | orchestrator | 00:01:41.377 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-18 00:01:41.465956 | orchestrator | 00:01:41.465 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=9308df78-3e4f-4a98-8bb5-88109b79d6b1] 2025-09-18 00:01:41.470143 | orchestrator | 00:01:41.469 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ec3df918-11f5-4518-b0b0-26406f679249] 2025-09-18 00:01:41.474789 | orchestrator | 00:01:41.474 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-18 00:01:41.475962 | orchestrator | 00:01:41.475 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-18 00:01:41.622538 | orchestrator | 00:01:41.622 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=23bc608c-e2ca-4ca3-9d1d-1e2d84f6d390] 2025-09-18 00:01:41.634544 | orchestrator | 00:01:41.634 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
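Note: the security group rules being created at this point match the plan shown earlier. As HCL they correspond to definitions of the following shape; all values are taken from the plan, and wiring the rules to the group via security_group_id is the usual provider pattern. The remaining rules (tcp/udp from 192.168.16.0/20, icmp, and the VRRP rule with protocol = "112") follow the same structure.

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      description       = "wireguard"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "udp"
      port_range_min    = 51820
      port_range_max    = 51820
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }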
2025-09-18 00:01:41.730224 | orchestrator | 00:01:41.729 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=998b7301-7e70-4af7-9a2c-2ad278ad77c4] 2025-09-18 00:01:41.738098 | orchestrator | 00:01:41.737 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-18 00:01:41.819261 | orchestrator | 00:01:41.818 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=38fcbbe7-3dee-4dfe-b05d-2948777cf89a] 2025-09-18 00:01:41.877063 | orchestrator | 00:01:41.876 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7142f12c-e772-4255-8da0-bd860ae74171] 2025-09-18 00:01:41.879706 | orchestrator | 00:01:41.879 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=7161c72d-8cb4-40a9-b23c-f8531e5c6e44] 2025-09-18 00:01:41.902531 | orchestrator | 00:01:41.902 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=e8ec83f8-808d-46eb-a5b8-1616060b2b53] 2025-09-18 00:01:42.176585 | orchestrator | 00:01:42.176 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=62ddcdfa-1d24-4b81-87c4-084594507e31] 2025-09-18 00:01:42.256957 | orchestrator | 00:01:42.256 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=de16ad99-6ac4-471c-a8b1-9758402d8581] 2025-09-18 00:01:42.340783 | orchestrator | 00:01:42.340 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=01016ac9-d043-4956-806d-fbe89cb7fd86] 2025-09-18 00:01:43.058075 | orchestrator | 00:01:42.352 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=312f36e2-fde5-4a04-886a-84d1532b3a64] 2025-09-18 00:01:43.058141 | orchestrator | 00:01:42.542 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=529faca9-d670-45bf-8752-9a9ede12f992] 2025-09-18 00:01:43.577064 | orchestrator | 00:01:43.576 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=2e9c44f6-05a6-43c9-bfa6-fbf10e5bec46] 2025-09-18 00:01:43.600092 | orchestrator | 00:01:43.599 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-18 00:01:43.615438 | orchestrator | 00:01:43.615 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-18 00:01:43.621280 | orchestrator | 00:01:43.621 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-18 00:01:43.623037 | orchestrator | 00:01:43.621 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-18 00:01:43.630598 | orchestrator | 00:01:43.630 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-18 00:01:43.631467 | orchestrator | 00:01:43.631 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-18 00:01:43.636660 | orchestrator | 00:01:43.636 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 
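Note: the manager floating IP and its association, created in the next few entries, would typically be declared as below. The pool name is an assumption, since that part of the plan is not shown in this excerpt; the association attributes follow the provider's standard floating_ip/port_id pattern.

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "external"  # assumed pool name; not visible in this log
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }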
2025-09-18 00:01:45.045137 | orchestrator | 00:01:45.044 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=0cc467eb-f277-4805-96f1-470c110bec1b] 2025-09-18 00:01:45.060370 | orchestrator | 00:01:45.060 STDOUT terraform: local_file.inventory: Creating... 2025-09-18 00:01:45.061214 | orchestrator | 00:01:45.061 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-18 00:01:45.063695 | orchestrator | 00:01:45.063 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a4c34420c941307fb06ceccdbbef588c72ffe804] 2025-09-18 00:01:45.064537 | orchestrator | 00:01:45.064 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-18 00:01:45.069469 | orchestrator | 00:01:45.069 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f1f8f3008ea2c8c0177f542be5947ef0394c735b] 2025-09-18 00:01:46.372205 | orchestrator | 00:01:46.371 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0cc467eb-f277-4805-96f1-470c110bec1b] 2025-09-18 00:01:53.621478 | orchestrator | 00:01:53.621 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-18 00:01:53.629460 | orchestrator | 00:01:53.629 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-18 00:01:53.629575 | orchestrator | 00:01:53.629 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-18 00:01:53.631719 | orchestrator | 00:01:53.631 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-18 00:01:53.636049 | orchestrator | 00:01:53.635 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-18 00:01:53.641295 | orchestrator | 00:01:53.641 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-18 00:02:03.621861 | orchestrator | 00:02:03.621 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-18 00:02:03.630091 | orchestrator | 00:02:03.629 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-18 00:02:03.630271 | orchestrator | 00:02:03.629 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-18 00:02:03.632259 | orchestrator | 00:02:03.632 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-18 00:02:03.636628 | orchestrator | 00:02:03.636 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-18 00:02:03.641869 | orchestrator | 00:02:03.641 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-18 00:02:13.624984 | orchestrator | 00:02:13.624 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-18 00:02:13.630092 | orchestrator | 00:02:13.629 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-09-18 00:02:13.630280 | orchestrator | 00:02:13.630 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-09-18 00:02:13.633483 | orchestrator | 00:02:13.633 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-09-18 00:02:13.637725 | orchestrator | 00:02:13.637 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-18 00:02:13.641927 | orchestrator | 00:02:13.641 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-09-18 00:02:14.403627 | orchestrator | 00:02:14.403 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=bc29c74c-a7f3-495f-976e-1aede5d85481] 2025-09-18 00:02:14.455275 | orchestrator | 00:02:14.454 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=8f06d019-5ee0-43f0-bdc5-f8fc2d8a6eb7] 2025-09-18 00:02:14.613989 | orchestrator | 00:02:14.613 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=1416a167-a60c-43e3-932e-b1c321bacc7f] 2025-09-18 00:02:14.635531 | orchestrator | 00:02:14.635 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=7ab41304-9351-47e6-9773-7ccc79169ae9] 2025-09-18 00:02:23.626737 | orchestrator | 00:02:23.626 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-09-18 00:02:23.630700 | orchestrator | 00:02:23.630 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-09-18 00:02:24.302696 | orchestrator | 00:02:24.302 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 40s [id=16418d7c-e2dd-4c97-ba3e-8358abc8955e] 2025-09-18 00:02:24.328995 | orchestrator | 00:02:24.328 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 40s [id=e33ceeea-2876-41d9-b8b5-10dafe024ce6] 2025-09-18 00:02:24.343493 | orchestrator | 00:02:24.343 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-18 00:02:24.350407 | orchestrator | 00:02:24.350 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1472789214539019988] 2025-09-18 00:02:24.351581 | orchestrator | 00:02:24.351 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-18 00:02:24.372346 | orchestrator | 00:02:24.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-18 00:02:24.372441 | orchestrator | 00:02:24.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-18 00:02:24.372710 | orchestrator | 00:02:24.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-18 00:02:24.373465 | orchestrator | 00:02:24.373 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-18 00:02:24.378764 | orchestrator | 00:02:24.378 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-18 00:02:24.381133 | orchestrator | 00:02:24.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-18 00:02:24.383427 | orchestrator | 00:02:24.383 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-18 00:02:24.396280 | orchestrator | 00:02:24.396 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
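Note: the volume attachments completing above pair each data volume with one of the node instances (the attachment IDs are printed as instance_id/volume_id). A corresponding HCL sketch is shown below; only the resource types and indexes come from the log, the index expression mapping three volumes to each of the nodes [3]..[5] is an assumption read off the attachment IDs.

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      # assumed mapping: volumes [0]..[8] spread across node_server[3], [4] and [5]
      instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }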
2025-09-18 00:02:24.402000 | orchestrator | 00:02:24.401 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-18 00:02:27.771647 | orchestrator | 00:02:27.771 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=e33ceeea-2876-41d9-b8b5-10dafe024ce6/6ac3f343-eabf-4363-a559-72345c6aba0d] 2025-09-18 00:02:27.788453 | orchestrator | 00:02:27.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=bc29c74c-a7f3-495f-976e-1aede5d85481/5ef9a296-1867-4994-9a8d-f57ea224fa97] 2025-09-18 00:02:27.811880 | orchestrator | 00:02:27.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=16418d7c-e2dd-4c97-ba3e-8358abc8955e/a477f58a-3bf1-4975-968c-c72809c2667c] 2025-09-18 00:02:27.822005 | orchestrator | 00:02:27.821 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=e33ceeea-2876-41d9-b8b5-10dafe024ce6/2ccda686-b8eb-476a-b4c1-b925092fcf31] 2025-09-18 00:02:27.838127 | orchestrator | 00:02:27.837 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=bc29c74c-a7f3-495f-976e-1aede5d85481/b8995dd7-5ece-41d0-bfc6-34744a0d6738] 2025-09-18 00:02:27.864917 | orchestrator | 00:02:27.864 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=16418d7c-e2dd-4c97-ba3e-8358abc8955e/df5980fc-abe4-45b8-a678-4af06952c2bd] 2025-09-18 00:02:33.939273 | orchestrator | 00:02:33.938 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=bc29c74c-a7f3-495f-976e-1aede5d85481/2e4d0087-2785-46b4-8f27-c306f0a9f7ca] 2025-09-18 00:02:33.954189 | orchestrator | 00:02:33.953 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=e33ceeea-2876-41d9-b8b5-10dafe024ce6/514241af-fcf0-4d5f-9d7d-ad7f828482f8] 2025-09-18 00:02:33.980690 | orchestrator | 00:02:33.980 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=16418d7c-e2dd-4c97-ba3e-8358abc8955e/79b9ef2b-0416-40aa-a8ac-6a91762c3739] 2025-09-18 00:02:34.403530 | orchestrator | 00:02:34.403 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-18 00:02:44.403843 | orchestrator | 00:02:44.403 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-18 00:02:44.833481 | orchestrator | 00:02:44.833 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=dbbac7e1-d197-4538-b17d-5b4417487019] 2025-09-18 00:02:44.858727 | orchestrator | 00:02:44.858 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
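Note: the two sensitive outputs reported after "Apply complete!" correspond to output blocks of roughly the following form. The log only reveals the output names and that both are marked sensitive; the value expressions here are assumptions (the manager floating IP and the generated SSH key written to local_sensitive_file.id_rsa).

    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
      sensitive = true
    }

    output "private_key" {
      value     = local_sensitive_file.id_rsa.content  # assumed source
      sensitive = true
    }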
2025-09-18 00:02:44.858861 | orchestrator | 00:02:44.858 STDOUT terraform: Outputs: 2025-09-18 00:02:44.858880 | orchestrator | 00:02:44.858 STDOUT terraform: manager_address = 2025-09-18 00:02:44.858892 | orchestrator | 00:02:44.858 STDOUT terraform: private_key = 2025-09-18 00:02:45.103251 | orchestrator | ok: Runtime: 0:01:17.715170 2025-09-18 00:02:45.147040 | 2025-09-18 00:02:45.147230 | TASK [Create infrastructure (stable)] 2025-09-18 00:02:45.681334 | orchestrator | skipping: Conditional result was False 2025-09-18 00:02:45.703250 | 2025-09-18 00:02:45.703531 | TASK [Fetch manager address] 2025-09-18 00:02:46.135185 | orchestrator | ok 2025-09-18 00:02:46.143245 | 2025-09-18 00:02:46.143351 | TASK [Set manager_host address] 2025-09-18 00:02:46.220451 | orchestrator | ok 2025-09-18 00:02:46.229244 | 2025-09-18 00:02:46.229362 | LOOP [Update ansible collections] 2025-09-18 00:02:48.460742 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-18 00:02:48.461141 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-18 00:02:48.461202 | orchestrator | Starting galaxy collection install process 2025-09-18 00:02:48.461245 | orchestrator | Process install dependency map 2025-09-18 00:02:48.461283 | orchestrator | Starting collection install process 2025-09-18 00:02:48.461318 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-09-18 00:02:48.461358 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-09-18 00:02:48.461480 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-18 00:02:48.461577 | orchestrator | ok: Item: commons Runtime: 0:00:01.941101 2025-09-18 00:02:49.171141 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-18 00:02:49.171359 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-18 00:02:49.171480 | orchestrator | Starting galaxy collection install process 2025-09-18 00:02:49.171533 | orchestrator | Process install dependency map 2025-09-18 00:02:49.171578 | orchestrator | Starting collection install process 2025-09-18 00:02:49.171622 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-18 00:02:49.171665 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-18 00:02:49.171706 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-18 00:02:49.171769 | orchestrator | ok: Item: services Runtime: 0:00:00.490516 2025-09-18 00:02:49.198713 | 2025-09-18 00:02:49.198892 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-18 00:02:59.739016 | orchestrator | ok 2025-09-18 00:02:59.751900 | 2025-09-18 00:02:59.752031 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-18 00:03:59.795569 | orchestrator | ok 2025-09-18 00:03:59.806916 | 2025-09-18 00:03:59.807056 | TASK [Fetch manager ssh hostkey] 2025-09-18 00:04:01.375817 | orchestrator | Output suppressed because no_log was given 2025-09-18 00:04:01.392482 | 2025-09-18 00:04:01.392656 | TASK [Get ssh keypair from terraform environment] 2025-09-18 00:04:01.930649 | orchestrator 
| ok: Runtime: 0:00:00.009923 2025-09-18 00:04:01.948866 | 2025-09-18 00:04:01.949037 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-18 00:04:01.998813 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-18 00:04:02.021428 | 2025-09-18 00:04:02.021543 | TASK [Run manager part 0] 2025-09-18 00:04:02.842604 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-18 00:04:02.886076 | orchestrator | 2025-09-18 00:04:02.886124 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-18 00:04:02.886131 | orchestrator | 2025-09-18 00:04:02.886143 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-18 00:04:04.701477 | orchestrator | ok: [testbed-manager] 2025-09-18 00:04:04.701556 | orchestrator | 2025-09-18 00:04:04.701601 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-18 00:04:04.701621 | orchestrator | 2025-09-18 00:04:04.701641 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:04:06.478231 | orchestrator | ok: [testbed-manager] 2025-09-18 00:04:06.478350 | orchestrator | 2025-09-18 00:04:06.478363 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-18 00:04:07.113198 | orchestrator | ok: [testbed-manager] 2025-09-18 00:04:07.113276 | orchestrator | 2025-09-18 00:04:07.113295 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-18 00:04:07.160908 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.160957 | orchestrator | 2025-09-18 00:04:07.160967 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-18 00:04:07.189563 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.189607 | orchestrator | 2025-09-18 00:04:07.189614 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-18 00:04:07.216612 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.216653 | orchestrator | 2025-09-18 00:04:07.216660 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-18 00:04:07.240676 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.240716 | orchestrator | 2025-09-18 00:04:07.240722 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-18 00:04:07.265628 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.265672 | orchestrator | 2025-09-18 00:04:07.265680 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-18 00:04:07.292955 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.293052 | orchestrator | 2025-09-18 00:04:07.293062 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-18 00:04:07.322494 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:04:07.322538 | orchestrator | 2025-09-18 00:04:07.322547 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-18 00:04:08.052320 | orchestrator | changed: 
[testbed-manager] 2025-09-18 00:04:08.052379 | orchestrator | 2025-09-18 00:04:08.052388 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-18 00:06:58.107300 | orchestrator | changed: [testbed-manager] 2025-09-18 00:06:58.107379 | orchestrator | 2025-09-18 00:06:58.107397 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-18 00:08:14.500994 | orchestrator | changed: [testbed-manager] 2025-09-18 00:08:14.501043 | orchestrator | 2025-09-18 00:08:14.501052 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-18 00:08:41.771131 | orchestrator | changed: [testbed-manager] 2025-09-18 00:08:41.771226 | orchestrator | 2025-09-18 00:08:41.771246 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-18 00:08:51.903149 | orchestrator | changed: [testbed-manager] 2025-09-18 00:08:51.903241 | orchestrator | 2025-09-18 00:08:51.903259 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-18 00:08:51.951725 | orchestrator | ok: [testbed-manager] 2025-09-18 00:08:51.951778 | orchestrator | 2025-09-18 00:08:51.951785 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-18 00:08:52.731627 | orchestrator | ok: [testbed-manager] 2025-09-18 00:08:52.731709 | orchestrator | 2025-09-18 00:08:52.731726 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-18 00:08:53.464537 | orchestrator | changed: [testbed-manager] 2025-09-18 00:08:53.464631 | orchestrator | 2025-09-18 00:08:53.464647 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-18 00:08:59.983528 | orchestrator | changed: [testbed-manager] 2025-09-18 00:08:59.983626 | orchestrator | 2025-09-18 00:08:59.983709 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-18 00:09:05.843941 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:05.843988 | orchestrator | 2025-09-18 00:09:05.843998 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-18 00:09:08.303126 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:08.303208 | orchestrator | 2025-09-18 00:09:08.303223 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-18 00:09:10.035564 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:10.035609 | orchestrator | 2025-09-18 00:09:10.035617 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-18 00:09:11.121397 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-18 00:09:11.121437 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-18 00:09:11.121444 | orchestrator | 2025-09-18 00:09:11.121451 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-18 00:09:11.165507 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-18 00:09:11.165559 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-18 00:09:11.165568 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-18 00:09:11.165576 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-18 00:09:14.386340 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-18 00:09:14.386417 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-18 00:09:14.386430 | orchestrator | 2025-09-18 00:09:14.386441 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-18 00:09:14.939952 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:14.940042 | orchestrator | 2025-09-18 00:09:14.940059 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-18 00:09:35.940293 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-18 00:09:35.940370 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-18 00:09:35.940381 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-18 00:09:35.940389 | orchestrator | 2025-09-18 00:09:35.940396 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-18 00:09:38.210863 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-18 00:09:38.210950 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-18 00:09:38.210966 | orchestrator | 2025-09-18 00:09:38.210978 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-18 00:09:38.210991 | orchestrator | 2025-09-18 00:09:38.211002 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:09:39.612296 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:39.612384 | orchestrator | 2025-09-18 00:09:39.612402 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-18 00:09:39.657974 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:39.658087 | orchestrator | 2025-09-18 00:09:39.658103 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-18 00:09:39.714659 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:39.714731 | orchestrator | 2025-09-18 00:09:39.714744 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-18 00:09:42.293389 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:42.293467 | orchestrator | 2025-09-18 00:09:42.293483 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-18 00:09:43.008436 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:43.008473 | orchestrator | 2025-09-18 00:09:43.008479 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-18 00:09:44.447654 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-18 00:09:44.447874 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-18 00:09:44.447893 | orchestrator | 2025-09-18 00:09:44.447924 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-18 00:09:45.820952 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:45.821086 | orchestrator | 2025-09-18 00:09:45.821103 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-18 00:09:47.537422 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:09:47.537529 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-18 00:09:47.537546 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:09:47.537560 | orchestrator | 2025-09-18 00:09:47.537575 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-18 00:09:47.590928 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:47.591029 | orchestrator | 2025-09-18 00:09:47.591045 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-18 00:09:48.144153 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:48.144259 | orchestrator | 2025-09-18 00:09:48.144277 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-18 00:09:48.207751 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:48.207850 | orchestrator | 2025-09-18 00:09:48.207865 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-18 00:09:49.043683 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 00:09:49.043789 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:49.043806 | orchestrator | 2025-09-18 00:09:49.043840 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-18 00:09:49.083434 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:49.083619 | orchestrator | 2025-09-18 00:09:49.083642 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-18 00:09:49.123043 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:49.123111 | orchestrator | 2025-09-18 00:09:49.123127 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-18 00:09:49.161058 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:49.161112 | orchestrator | 2025-09-18 00:09:49.161127 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-18 00:09:49.208223 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:49.208282 | orchestrator | 2025-09-18 00:09:49.208298 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-18 00:09:49.873448 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:49.874521 | orchestrator | 2025-09-18 00:09:49.874573 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-18 00:09:49.874594 | orchestrator | 2025-09-18 00:09:49.874608 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:09:51.261052 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:51.261176 | orchestrator | 2025-09-18 00:09:51.261193 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-18 00:09:52.210615 | orchestrator | changed: [testbed-manager] 2025-09-18 00:09:52.210736 | orchestrator | 2025-09-18 00:09:52.210756 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:09:52.210770 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-18 
00:09:52.210781 | orchestrator | 2025-09-18 00:09:52.747486 | orchestrator | ok: Runtime: 0:05:49.994357 2025-09-18 00:09:52.769968 | 2025-09-18 00:09:52.770202 | TASK [Point out that the log in on the manager is now possible] 2025-09-18 00:09:52.811699 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-18 00:09:52.821968 | 2025-09-18 00:09:52.822091 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-18 00:09:52.859629 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-09-18 00:09:52.869685 | 2025-09-18 00:09:52.869820 | TASK [Run manager part 1 + 2] 2025-09-18 00:09:53.733040 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-18 00:09:53.793774 | orchestrator | 2025-09-18 00:09:53.793865 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-18 00:09:53.793875 | orchestrator | 2025-09-18 00:09:53.793891 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:09:56.443080 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:56.443168 | orchestrator | 2025-09-18 00:09:56.443197 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-18 00:09:56.480715 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:56.480766 | orchestrator | 2025-09-18 00:09:56.480776 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-18 00:09:56.528143 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:56.528196 | orchestrator | 2025-09-18 00:09:56.528206 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-18 00:09:56.570435 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:56.570482 | orchestrator | 2025-09-18 00:09:56.570491 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-18 00:09:56.644132 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:56.644186 | orchestrator | 2025-09-18 00:09:56.644197 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-18 00:09:56.695444 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:56.695477 | orchestrator | 2025-09-18 00:09:56.695486 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-18 00:09:56.743080 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-18 00:09:56.743109 | orchestrator | 2025-09-18 00:09:56.743115 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-18 00:09:57.397163 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:57.397221 | orchestrator | 2025-09-18 00:09:57.397230 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-18 00:09:57.447097 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:09:57.447150 | orchestrator | 2025-09-18 00:09:57.447160 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-18 00:09:58.647514 | orchestrator | changed: 
[testbed-manager] 2025-09-18 00:09:58.647576 | orchestrator | 2025-09-18 00:09:58.647586 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-18 00:09:59.170686 | orchestrator | ok: [testbed-manager] 2025-09-18 00:09:59.170739 | orchestrator | 2025-09-18 00:09:59.170748 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-18 00:10:00.225896 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:00.225943 | orchestrator | 2025-09-18 00:10:00.225953 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-18 00:10:16.043903 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:16.044003 | orchestrator | 2025-09-18 00:10:16.044020 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-18 00:10:16.641772 | orchestrator | ok: [testbed-manager] 2025-09-18 00:10:16.641869 | orchestrator | 2025-09-18 00:10:16.641885 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-18 00:10:16.692545 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:10:16.692600 | orchestrator | 2025-09-18 00:10:16.692607 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-18 00:10:17.557838 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:17.557877 | orchestrator | 2025-09-18 00:10:17.557885 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-18 00:10:18.438717 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:18.438762 | orchestrator | 2025-09-18 00:10:18.438770 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-18 00:10:19.003232 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:19.003309 | orchestrator | 2025-09-18 00:10:19.003325 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-18 00:10:19.057665 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-18 00:10:19.057873 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-18 00:10:19.057898 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-18 00:10:19.057911 | orchestrator | deprecation_warnings=False in ansible.cfg. 
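
The deprecation warning repeated on the source-sync and repo-copy tasks above can, as the message itself notes, be switched off for the whole run. A minimal sketch of both options, assuming the run is governed either by the shell environment or by the ansible.cfg under /opt/configuration/environments (the same file this deployment edits later in the log):

# Per-run: silence Ansible deprecation warnings via the environment.
export ANSIBLE_DEPRECATION_WARNINGS=False

# Persistent: add the equivalent key under [defaults] in the governing ansible.cfg,
# e.g. /opt/configuration/environments/ansible.cfg:
#   [defaults]
#   deprecation_warnings = False
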
2025-09-18 00:10:21.470757 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:21.470886 | orchestrator | 2025-09-18 00:10:21.470904 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-18 00:10:30.498487 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-18 00:10:30.498551 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-18 00:10:30.498563 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-18 00:10:30.498572 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-18 00:10:30.498584 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-18 00:10:30.498592 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-18 00:10:30.498599 | orchestrator | 2025-09-18 00:10:30.498608 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-18 00:10:31.518768 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:31.518896 | orchestrator | 2025-09-18 00:10:31.518916 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-18 00:10:31.565406 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:10:31.565454 | orchestrator | 2025-09-18 00:10:31.565462 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-18 00:10:34.745753 | orchestrator | changed: [testbed-manager] 2025-09-18 00:10:34.745891 | orchestrator | 2025-09-18 00:10:34.745909 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-18 00:10:34.791421 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:10:34.791516 | orchestrator | 2025-09-18 00:10:34.791530 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-18 00:12:07.001319 | orchestrator | changed: [testbed-manager] 2025-09-18 00:12:07.001444 | orchestrator | 2025-09-18 00:12:07.001465 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-18 00:12:08.146477 | orchestrator | ok: [testbed-manager] 2025-09-18 00:12:08.146591 | orchestrator | 2025-09-18 00:12:08.146610 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:12:08.146625 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-18 00:12:08.146637 | orchestrator | 2025-09-18 00:12:08.502349 | orchestrator | ok: Runtime: 0:02:15.072856 2025-09-18 00:12:08.519734 | 2025-09-18 00:12:08.519889 | TASK [Reboot manager] 2025-09-18 00:12:10.056175 | orchestrator | ok: Runtime: 0:00:00.979810 2025-09-18 00:12:10.073047 | 2025-09-18 00:12:10.073207 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-18 00:12:24.284014 | orchestrator | ok 2025-09-18 00:12:24.296011 | 2025-09-18 00:12:24.296145 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-18 00:13:24.343875 | orchestrator | ok 2025-09-18 00:13:24.354539 | 2025-09-18 00:13:24.354670 | TASK [Deploy manager + bootstrap nodes] 2025-09-18 00:13:26.832814 | orchestrator | 2025-09-18 00:13:26.833022 | orchestrator | # DEPLOY MANAGER 2025-09-18 00:13:26.833050 | orchestrator | 2025-09-18 00:13:26.833067 | orchestrator | + set -e 2025-09-18 00:13:26.833081 | orchestrator | + echo 2025-09-18 00:13:26.833096 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-18 00:13:26.833113 | orchestrator | + echo 2025-09-18 00:13:26.833162 | orchestrator | + cat /opt/manager-vars.sh 2025-09-18 00:13:26.836607 | orchestrator | export NUMBER_OF_NODES=6 2025-09-18 00:13:26.836654 | orchestrator | 2025-09-18 00:13:26.836668 | orchestrator | export CEPH_VERSION=reef 2025-09-18 00:13:26.836714 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-18 00:13:26.836728 | orchestrator | export MANAGER_VERSION=latest 2025-09-18 00:13:26.836753 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-18 00:13:26.836764 | orchestrator | 2025-09-18 00:13:26.836782 | orchestrator | export ARA=false 2025-09-18 00:13:26.836794 | orchestrator | export DEPLOY_MODE=manager 2025-09-18 00:13:26.836814 | orchestrator | export TEMPEST=true 2025-09-18 00:13:26.836827 | orchestrator | export IS_ZUUL=true 2025-09-18 00:13:26.836840 | orchestrator | 2025-09-18 00:13:26.836859 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:13:26.836872 | orchestrator | export EXTERNAL_API=false 2025-09-18 00:13:26.836885 | orchestrator | 2025-09-18 00:13:26.836897 | orchestrator | export IMAGE_USER=ubuntu 2025-09-18 00:13:26.836913 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-18 00:13:26.836925 | orchestrator | 2025-09-18 00:13:26.836938 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-18 00:13:26.837061 | orchestrator | 2025-09-18 00:13:26.837078 | orchestrator | + echo 2025-09-18 00:13:26.837091 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-18 00:13:26.837701 | orchestrator | ++ export INTERACTIVE=false 2025-09-18 00:13:26.837718 | orchestrator | ++ INTERACTIVE=false 2025-09-18 00:13:26.837819 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-18 00:13:26.837836 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-18 00:13:26.837959 | orchestrator | + source /opt/manager-vars.sh 2025-09-18 00:13:26.837975 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-18 00:13:26.837987 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-18 00:13:26.837998 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-18 00:13:26.838008 | orchestrator | ++ CEPH_VERSION=reef 2025-09-18 00:13:26.838054 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-18 00:13:26.838120 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-18 00:13:26.838135 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-18 00:13:26.838146 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-18 00:13:26.838157 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-18 00:13:26.838178 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-18 00:13:26.838190 | orchestrator | ++ export ARA=false 2025-09-18 00:13:26.838202 | orchestrator | ++ ARA=false 2025-09-18 00:13:26.838218 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-18 00:13:26.838229 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-18 00:13:26.838240 | orchestrator | ++ export TEMPEST=true 2025-09-18 00:13:26.838251 | orchestrator | ++ TEMPEST=true 2025-09-18 00:13:26.838262 | orchestrator | ++ export IS_ZUUL=true 2025-09-18 00:13:26.838273 | orchestrator | ++ IS_ZUUL=true 2025-09-18 00:13:26.838284 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:13:26.838295 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:13:26.838306 | orchestrator | ++ export EXTERNAL_API=false 2025-09-18 00:13:26.838318 | orchestrator | ++ EXTERNAL_API=false 2025-09-18 00:13:26.838329 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-18 
00:13:26.838340 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-18 00:13:26.838355 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-18 00:13:26.838366 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-18 00:13:26.838377 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-18 00:13:26.838388 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-18 00:13:26.838400 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-18 00:13:26.900880 | orchestrator | + docker version 2025-09-18 00:13:27.183064 | orchestrator | Client: Docker Engine - Community 2025-09-18 00:13:27.183171 | orchestrator | Version: 27.5.1 2025-09-18 00:13:27.183187 | orchestrator | API version: 1.47 2025-09-18 00:13:27.183200 | orchestrator | Go version: go1.22.11 2025-09-18 00:13:27.183210 | orchestrator | Git commit: 9f9e405 2025-09-18 00:13:27.183220 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-18 00:13:27.183230 | orchestrator | OS/Arch: linux/amd64 2025-09-18 00:13:27.183240 | orchestrator | Context: default 2025-09-18 00:13:27.183250 | orchestrator | 2025-09-18 00:13:27.183260 | orchestrator | Server: Docker Engine - Community 2025-09-18 00:13:27.183270 | orchestrator | Engine: 2025-09-18 00:13:27.183280 | orchestrator | Version: 27.5.1 2025-09-18 00:13:27.183291 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-18 00:13:27.183333 | orchestrator | Go version: go1.22.11 2025-09-18 00:13:27.183344 | orchestrator | Git commit: 4c9b3b0 2025-09-18 00:13:27.183353 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-18 00:13:27.183363 | orchestrator | OS/Arch: linux/amd64 2025-09-18 00:13:27.183373 | orchestrator | Experimental: false 2025-09-18 00:13:27.183382 | orchestrator | containerd: 2025-09-18 00:13:27.183392 | orchestrator | Version: 1.7.27 2025-09-18 00:13:27.183402 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-18 00:13:27.183412 | orchestrator | runc: 2025-09-18 00:13:27.183422 | orchestrator | Version: 1.2.5 2025-09-18 00:13:27.183432 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-18 00:13:27.183442 | orchestrator | docker-init: 2025-09-18 00:13:27.183451 | orchestrator | Version: 0.19.0 2025-09-18 00:13:27.183462 | orchestrator | GitCommit: de40ad0 2025-09-18 00:13:27.186545 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-18 00:13:27.209793 | orchestrator | + set -e 2025-09-18 00:13:27.209871 | orchestrator | + source /opt/manager-vars.sh 2025-09-18 00:13:27.209894 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-18 00:13:27.209916 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-18 00:13:27.209934 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-18 00:13:27.209953 | orchestrator | ++ CEPH_VERSION=reef 2025-09-18 00:13:27.209971 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-18 00:13:27.209988 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-18 00:13:27.210004 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-18 00:13:27.210084 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-18 00:13:27.210107 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-18 00:13:27.210123 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-18 00:13:27.210139 | orchestrator | ++ export ARA=false 2025-09-18 00:13:27.210158 | orchestrator | ++ ARA=false 2025-09-18 00:13:27.210184 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-18 00:13:27.210203 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-18 00:13:27.210221 | orchestrator | ++ 
export TEMPEST=true 2025-09-18 00:13:27.210238 | orchestrator | ++ TEMPEST=true 2025-09-18 00:13:27.210256 | orchestrator | ++ export IS_ZUUL=true 2025-09-18 00:13:27.210274 | orchestrator | ++ IS_ZUUL=true 2025-09-18 00:13:27.210291 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:13:27.210308 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:13:27.210325 | orchestrator | ++ export EXTERNAL_API=false 2025-09-18 00:13:27.210351 | orchestrator | ++ EXTERNAL_API=false 2025-09-18 00:13:27.210370 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-18 00:13:27.210386 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-18 00:13:27.210403 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-18 00:13:27.210420 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-18 00:13:27.210438 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-18 00:13:27.210454 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-18 00:13:27.210487 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-18 00:13:27.210505 | orchestrator | ++ export INTERACTIVE=false 2025-09-18 00:13:27.210523 | orchestrator | ++ INTERACTIVE=false 2025-09-18 00:13:27.210540 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-18 00:13:27.210562 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-18 00:13:27.210579 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-18 00:13:27.210597 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-18 00:13:27.210614 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-18 00:13:27.218331 | orchestrator | + set -e 2025-09-18 00:13:27.218375 | orchestrator | + VERSION=reef 2025-09-18 00:13:27.220141 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-18 00:13:27.226417 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-18 00:13:27.226456 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-18 00:13:27.231862 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-18 00:13:27.237813 | orchestrator | + set -e 2025-09-18 00:13:27.237840 | orchestrator | + VERSION=2024.2 2025-09-18 00:13:27.238259 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-18 00:13:27.242738 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-18 00:13:27.242784 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-18 00:13:27.248035 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-18 00:13:27.249087 | orchestrator | ++ semver latest 7.0.0 2025-09-18 00:13:27.319169 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-18 00:13:27.319274 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-18 00:13:27.319292 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-18 00:13:27.319306 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-18 00:13:27.418628 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-18 00:13:27.421038 | orchestrator | + source /opt/venv/bin/activate 2025-09-18 00:13:27.422266 | orchestrator | ++ deactivate nondestructive 2025-09-18 00:13:27.422305 | orchestrator | ++ '[' -n '' ']' 2025-09-18 00:13:27.422318 | orchestrator | ++ '[' -n '' ']' 2025-09-18 00:13:27.422330 | orchestrator | ++ hash -r 2025-09-18 00:13:27.422350 | orchestrator | ++ 
'[' -n '' ']' 2025-09-18 00:13:27.422362 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-18 00:13:27.422373 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-18 00:13:27.422385 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-18 00:13:27.422603 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-18 00:13:27.422636 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-18 00:13:27.422648 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-18 00:13:27.422659 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-18 00:13:27.422800 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-18 00:13:27.422822 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-18 00:13:27.422834 | orchestrator | ++ export PATH 2025-09-18 00:13:27.422845 | orchestrator | ++ '[' -n '' ']' 2025-09-18 00:13:27.423043 | orchestrator | ++ '[' -z '' ']' 2025-09-18 00:13:27.423058 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-18 00:13:27.423070 | orchestrator | ++ PS1='(venv) ' 2025-09-18 00:13:27.423081 | orchestrator | ++ export PS1 2025-09-18 00:13:27.423092 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-18 00:13:27.423103 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-18 00:13:27.423114 | orchestrator | ++ hash -r 2025-09-18 00:13:27.423320 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-18 00:13:28.653601 | orchestrator | 2025-09-18 00:13:28.653776 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-18 00:13:28.653796 | orchestrator | 2025-09-18 00:13:28.653864 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-18 00:13:29.204309 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:29.204410 | orchestrator | 2025-09-18 00:13:29.204427 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-18 00:13:30.143273 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:30.143383 | orchestrator | 2025-09-18 00:13:30.143398 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-18 00:13:30.143410 | orchestrator | 2025-09-18 00:13:30.143420 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:13:33.470094 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:33.470218 | orchestrator | 2025-09-18 00:13:33.470245 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-18 00:13:33.525986 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:33.526097 | orchestrator | 2025-09-18 00:13:33.526115 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-18 00:13:33.999636 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:33.999796 | orchestrator | 2025-09-18 00:13:33.999814 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-18 00:13:34.044276 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:13:34.044380 | orchestrator | 2025-09-18 00:13:34.044395 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-18 00:13:34.394955 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:34.395056 | orchestrator | 2025-09-18 00:13:34.395070 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-18 00:13:34.453027 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:13:34.453112 | orchestrator | 2025-09-18 00:13:34.453127 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-18 00:13:34.778984 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:34.779307 | orchestrator | 2025-09-18 00:13:34.779328 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-18 00:13:34.894342 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:13:34.894463 | orchestrator | 2025-09-18 00:13:34.894491 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-18 00:13:34.894512 | orchestrator | 2025-09-18 00:13:34.894537 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:13:36.636262 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:36.636383 | orchestrator | 2025-09-18 00:13:36.636398 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-18 00:13:36.732488 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-18 00:13:36.732609 | orchestrator | 2025-09-18 00:13:36.732624 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-18 00:13:36.791390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-18 00:13:36.791490 | orchestrator | 2025-09-18 00:13:36.791505 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-18 00:13:37.891614 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-18 00:13:37.891784 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-18 00:13:37.891800 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-18 00:13:37.891813 | orchestrator | 2025-09-18 00:13:37.891826 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-18 00:13:39.660569 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-18 00:13:39.660778 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-18 00:13:39.661521 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-18 00:13:39.661543 | orchestrator | 2025-09-18 00:13:39.661556 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-18 00:13:40.280807 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 00:13:40.280929 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:40.280945 | orchestrator | 2025-09-18 00:13:40.280958 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-18 00:13:40.932364 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 00:13:40.932487 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:40.932504 | orchestrator | 2025-09-18 00:13:40.932518 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-18 00:13:40.991275 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:13:40.991400 | orchestrator | 2025-09-18 00:13:40.991415 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-18 00:13:41.346843 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:41.346959 | orchestrator | 2025-09-18 00:13:41.346975 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-18 00:13:41.417255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-18 00:13:41.417359 | orchestrator | 2025-09-18 00:13:41.417374 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-18 00:13:42.491926 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:42.492054 | orchestrator | 2025-09-18 00:13:42.492071 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-18 00:13:43.286521 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:43.286647 | orchestrator | 2025-09-18 00:13:43.286729 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-18 00:13:55.255979 | orchestrator | changed: [testbed-manager] 2025-09-18 00:13:55.256105 | orchestrator | 2025-09-18 00:13:55.256122 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-18 00:13:55.309352 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:13:55.309410 | orchestrator | 2025-09-18 00:13:55.309426 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-18 00:13:55.309440 | orchestrator | 2025-09-18 00:13:55.309451 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:13:57.136326 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:57.136412 | orchestrator | 2025-09-18 00:13:57.136453 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-18 00:13:57.249736 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-18 00:13:57.249814 | orchestrator | 2025-09-18 00:13:57.249823 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-18 00:13:57.310325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 00:13:57.310415 | orchestrator | 2025-09-18 00:13:57.310430 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-18 00:13:59.895258 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:59.895372 | orchestrator | 2025-09-18 00:13:59.895391 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-18 00:13:59.950154 | orchestrator | ok: [testbed-manager] 2025-09-18 00:13:59.950203 | orchestrator | 2025-09-18 00:13:59.950219 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-18 00:14:00.083888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-18 00:14:00.083947 | orchestrator | 2025-09-18 00:14:00.083960 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-18 00:14:02.930413 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-18 00:14:02.930524 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-18 00:14:02.930539 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-18 00:14:02.930551 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-18 00:14:02.930562 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-18 00:14:02.930574 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-18 00:14:02.930585 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-18 00:14:02.930596 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-18 00:14:02.930608 | orchestrator | 2025-09-18 00:14:02.930620 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-18 00:14:03.549123 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:03.549257 | orchestrator | 2025-09-18 00:14:03.549275 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-18 00:14:04.222751 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:04.222883 | orchestrator | 2025-09-18 00:14:04.222903 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-18 00:14:04.301081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-18 00:14:04.301185 | orchestrator | 2025-09-18 00:14:04.301199 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-18 00:14:05.549908 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-18 00:14:05.550974 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-18 00:14:05.551011 | orchestrator | 2025-09-18 00:14:05.551025 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-18 00:14:06.176191 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:06.176287 | orchestrator | 2025-09-18 00:14:06.176303 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-18 00:14:06.231843 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:14:06.231871 | orchestrator | 2025-09-18 00:14:06.231884 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-18 00:14:06.298197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-18 00:14:06.298223 | orchestrator | 2025-09-18 00:14:06.298235 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-18 00:14:06.891500 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:06.891586 | orchestrator | 2025-09-18 00:14:06.891599 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-18 00:14:06.943682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-18 00:14:06.943759 | orchestrator | 2025-09-18 00:14:06.943772 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-18 00:14:08.287447 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 00:14:08.287554 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 00:14:08.287571 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:08.287585 | orchestrator | 2025-09-18 00:14:08.287597 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-18 00:14:08.903087 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:08.903198 | orchestrator | 2025-09-18 00:14:08.903215 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-18 00:14:08.955424 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:14:08.955479 | orchestrator | 2025-09-18 00:14:08.955493 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-18 00:14:09.051083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-18 00:14:09.051168 | orchestrator | 2025-09-18 00:14:09.051183 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-18 00:14:09.579849 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:09.579940 | orchestrator | 2025-09-18 00:14:09.579953 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-18 00:14:09.962107 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:09.962202 | orchestrator | 2025-09-18 00:14:09.962217 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-18 00:14:11.203346 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-18 00:14:11.204131 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-18 00:14:11.204159 | orchestrator | 2025-09-18 00:14:11.204167 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-18 00:14:11.846930 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:11.847028 | orchestrator | 2025-09-18 00:14:11.847044 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-18 00:14:12.254394 | orchestrator | ok: [testbed-manager] 2025-09-18 00:14:12.254491 | orchestrator | 2025-09-18 00:14:12.254506 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-18 00:14:12.609527 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:12.609614 | orchestrator | 2025-09-18 00:14:12.609629 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-18 00:14:12.655826 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:14:12.655848 | orchestrator | 2025-09-18 00:14:12.655860 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-18 00:14:12.726692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-18 00:14:12.726769 | orchestrator | 2025-09-18 00:14:12.726782 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-18 00:14:12.769164 | orchestrator | ok: [testbed-manager] 2025-09-18 00:14:12.769238 | 
orchestrator | 2025-09-18 00:14:12.769252 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-18 00:14:14.795934 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-18 00:14:14.796044 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-18 00:14:14.796060 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-18 00:14:14.796071 | orchestrator | 2025-09-18 00:14:14.796084 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-18 00:14:15.492370 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:15.492468 | orchestrator | 2025-09-18 00:14:15.492486 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-18 00:14:16.180358 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:16.180449 | orchestrator | 2025-09-18 00:14:16.180465 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-18 00:14:16.886209 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:16.886318 | orchestrator | 2025-09-18 00:14:16.886336 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-18 00:14:16.959695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-18 00:14:16.959763 | orchestrator | 2025-09-18 00:14:16.959777 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-18 00:14:16.996545 | orchestrator | ok: [testbed-manager] 2025-09-18 00:14:16.996582 | orchestrator | 2025-09-18 00:14:16.996595 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-18 00:14:17.712844 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-18 00:14:17.712946 | orchestrator | 2025-09-18 00:14:17.712963 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-18 00:14:17.791195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-18 00:14:17.791279 | orchestrator | 2025-09-18 00:14:17.791293 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-18 00:14:18.504474 | orchestrator | changed: [testbed-manager] 2025-09-18 00:14:18.504552 | orchestrator | 2025-09-18 00:14:18.504561 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-18 00:14:19.064749 | orchestrator | ok: [testbed-manager] 2025-09-18 00:14:19.064827 | orchestrator | 2025-09-18 00:14:19.064836 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-18 00:14:19.118937 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:14:19.118986 | orchestrator | 2025-09-18 00:14:19.118995 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-18 00:14:19.178410 | orchestrator | ok: [testbed-manager] 2025-09-18 00:14:19.178461 | orchestrator | 2025-09-18 00:14:19.178469 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-18 00:14:19.975081 | orchestrator | changed: [testbed-manager] 2025-09-18 
00:14:19.975187 | orchestrator | 2025-09-18 00:14:19.975204 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-18 00:15:28.954934 | orchestrator | changed: [testbed-manager] 2025-09-18 00:15:28.955058 | orchestrator | 2025-09-18 00:15:28.955077 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-18 00:15:30.936081 | orchestrator | ok: [testbed-manager] 2025-09-18 00:15:30.936184 | orchestrator | 2025-09-18 00:15:30.936199 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-18 00:15:30.988806 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:15:30.988890 | orchestrator | 2025-09-18 00:15:30.988906 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-18 00:15:35.001243 | orchestrator | changed: [testbed-manager] 2025-09-18 00:15:35.001362 | orchestrator | 2025-09-18 00:15:35.001380 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-18 00:15:35.060421 | orchestrator | ok: [testbed-manager] 2025-09-18 00:15:35.060485 | orchestrator | 2025-09-18 00:15:35.060502 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-18 00:15:35.060514 | orchestrator | 2025-09-18 00:15:35.060525 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-18 00:15:35.102787 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:15:35.102874 | orchestrator | 2025-09-18 00:15:35.102889 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-18 00:16:35.161799 | orchestrator | Pausing for 60 seconds 2025-09-18 00:16:35.161954 | orchestrator | changed: [testbed-manager] 2025-09-18 00:16:35.161974 | orchestrator | 2025-09-18 00:16:35.161988 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-18 00:16:39.718840 | orchestrator | changed: [testbed-manager] 2025-09-18 00:16:39.718973 | orchestrator | 2025-09-18 00:16:39.718990 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-09-18 00:17:21.373693 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-18 00:17:21.373817 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
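
The two FAILED - RETRYING lines above are expected: the handler keeps retrying (a budget of 50 attempts is visible) until the freshly started manager stack reports healthy. If that budget were ever exhausted, a quick manual check of the same health state looks roughly like the sketch below; the compose project path and container name are taken from the compose listing further down in this log, and access to the Docker socket is assumed:

# Sketch: inspect the manager stack's healthchecks by hand.
docker compose --project-directory /opt/manager ps            # overall service/health overview

# Status of a single container: healthy, starting or unhealthy.
docker inspect -f '{{.State.Health.Status}}' manager-api-1

# Full healthcheck history (exit codes and probe output) for the same container.
docker inspect -f '{{json .State.Health}}' manager-api-1 | python3 -m json.tool
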
2025-09-18 00:17:21.373836 | orchestrator | changed: [testbed-manager] 2025-09-18 00:17:21.373881 | orchestrator | 2025-09-18 00:17:21.373894 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-18 00:17:29.806947 | orchestrator | changed: [testbed-manager] 2025-09-18 00:17:29.807049 | orchestrator | 2025-09-18 00:17:29.807067 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-18 00:17:29.881741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-18 00:17:29.881794 | orchestrator | 2025-09-18 00:17:29.881808 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-18 00:17:29.881820 | orchestrator | 2025-09-18 00:17:29.881831 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-18 00:17:29.926474 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:17:29.926516 | orchestrator | 2025-09-18 00:17:29.926529 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:17:29.926541 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-18 00:17:29.926553 | orchestrator | 2025-09-18 00:17:29.992543 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-18 00:17:29.992628 | orchestrator | + deactivate 2025-09-18 00:17:29.992643 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-18 00:17:29.992656 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-18 00:17:29.992667 | orchestrator | + export PATH 2025-09-18 00:17:29.992678 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-18 00:17:29.992689 | orchestrator | + '[' -n '' ']' 2025-09-18 00:17:29.992701 | orchestrator | + hash -r 2025-09-18 00:17:29.992731 | orchestrator | + '[' -n '' ']' 2025-09-18 00:17:29.992742 | orchestrator | + unset VIRTUAL_ENV 2025-09-18 00:17:29.992753 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-18 00:17:29.992765 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-18 00:17:29.992776 | orchestrator | + unset -f deactivate 2025-09-18 00:17:29.992798 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-18 00:17:29.998843 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-18 00:17:29.998916 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-18 00:17:29.998932 | orchestrator | + local max_attempts=60 2025-09-18 00:17:29.998946 | orchestrator | + local name=ceph-ansible 2025-09-18 00:17:29.998958 | orchestrator | + local attempt_num=1 2025-09-18 00:17:29.999195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:17:30.023098 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:17:30.023160 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-18 00:17:30.023172 | orchestrator | + local max_attempts=60 2025-09-18 00:17:30.023184 | orchestrator | + local name=kolla-ansible 2025-09-18 00:17:30.023195 | orchestrator | + local attempt_num=1 2025-09-18 00:17:30.023740 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-18 00:17:30.054665 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:17:30.054729 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-18 00:17:30.054740 | orchestrator | + local max_attempts=60 2025-09-18 00:17:30.054750 | orchestrator | + local name=osism-ansible 2025-09-18 00:17:30.054759 | orchestrator | + local attempt_num=1 2025-09-18 00:17:30.055616 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-18 00:17:30.088300 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:17:30.088362 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-18 00:17:30.088373 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-18 00:17:30.741945 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-18 00:17:30.927892 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-18 00:17:30.927971 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.927986 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928021 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-18 00:17:30.928034 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-18 00:17:30.928055 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928067 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928078 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-09-18 00:17:30.928089 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928100 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-18 00:17:30.928111 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928122 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-18 00:17:30.928133 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928143 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-18 00:17:30.928154 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.928165 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-18 00:17:30.935274 | orchestrator | ++ semver latest 7.0.0 2025-09-18 00:17:30.979848 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-18 00:17:30.979913 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-18 00:17:30.979927 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-18 00:17:30.983244 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-18 00:17:42.892513 | orchestrator | 2025-09-18 00:17:42 | INFO  | Task 990a3cbd-034c-43cf-bf4f-903567c75d1e (resolvconf) was prepared for execution. 2025-09-18 00:17:42.892663 | orchestrator | 2025-09-18 00:17:42 | INFO  | It takes a moment until task 990a3cbd-034c-43cf-bf4f-903567c75d1e (resolvconf) has been started and output is visible here. 
2025-09-18 00:17:56.091792 | orchestrator | 2025-09-18 00:17:56.091910 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-18 00:17:56.091928 | orchestrator | 2025-09-18 00:17:56.091940 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:17:56.091980 | orchestrator | Thursday 18 September 2025 00:17:46 +0000 (0:00:00.144) 0:00:00.144 **** 2025-09-18 00:17:56.091993 | orchestrator | ok: [testbed-manager] 2025-09-18 00:17:56.092006 | orchestrator | 2025-09-18 00:17:56.092018 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-18 00:17:56.092030 | orchestrator | Thursday 18 September 2025 00:17:50 +0000 (0:00:03.686) 0:00:03.831 **** 2025-09-18 00:17:56.092041 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:17:56.092053 | orchestrator | 2025-09-18 00:17:56.092064 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-18 00:17:56.092075 | orchestrator | Thursday 18 September 2025 00:17:50 +0000 (0:00:00.063) 0:00:03.895 **** 2025-09-18 00:17:56.092086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-18 00:17:56.092098 | orchestrator | 2025-09-18 00:17:56.092109 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-18 00:17:56.092120 | orchestrator | Thursday 18 September 2025 00:17:50 +0000 (0:00:00.072) 0:00:03.968 **** 2025-09-18 00:17:56.092132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 00:17:56.092143 | orchestrator | 2025-09-18 00:17:56.092154 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-18 00:17:56.092165 | orchestrator | Thursday 18 September 2025 00:17:50 +0000 (0:00:00.061) 0:00:04.029 **** 2025-09-18 00:17:56.092176 | orchestrator | ok: [testbed-manager] 2025-09-18 00:17:56.092186 | orchestrator | 2025-09-18 00:17:56.092197 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-18 00:17:56.092208 | orchestrator | Thursday 18 September 2025 00:17:51 +0000 (0:00:01.035) 0:00:05.064 **** 2025-09-18 00:17:56.092219 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:17:56.092230 | orchestrator | 2025-09-18 00:17:56.092241 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-18 00:17:56.092252 | orchestrator | Thursday 18 September 2025 00:17:51 +0000 (0:00:00.064) 0:00:05.129 **** 2025-09-18 00:17:56.092262 | orchestrator | ok: [testbed-manager] 2025-09-18 00:17:56.092273 | orchestrator | 2025-09-18 00:17:56.092284 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-18 00:17:56.092295 | orchestrator | Thursday 18 September 2025 00:17:52 +0000 (0:00:00.457) 0:00:05.586 **** 2025-09-18 00:17:56.092306 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:17:56.092317 | orchestrator | 2025-09-18 00:17:56.092328 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-18 00:17:56.092340 | orchestrator | Thursday 18 September 2025 00:17:52 +0000 (0:00:00.079) 
0:00:05.666 **** 2025-09-18 00:17:56.092351 | orchestrator | changed: [testbed-manager] 2025-09-18 00:17:56.092362 | orchestrator | 2025-09-18 00:17:56.092372 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-18 00:17:56.092383 | orchestrator | Thursday 18 September 2025 00:17:52 +0000 (0:00:00.562) 0:00:06.229 **** 2025-09-18 00:17:56.092394 | orchestrator | changed: [testbed-manager] 2025-09-18 00:17:56.092405 | orchestrator | 2025-09-18 00:17:56.092416 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-18 00:17:56.092426 | orchestrator | Thursday 18 September 2025 00:17:53 +0000 (0:00:01.044) 0:00:07.273 **** 2025-09-18 00:17:56.092437 | orchestrator | ok: [testbed-manager] 2025-09-18 00:17:56.092448 | orchestrator | 2025-09-18 00:17:56.092459 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-18 00:17:56.092470 | orchestrator | Thursday 18 September 2025 00:17:54 +0000 (0:00:00.935) 0:00:08.209 **** 2025-09-18 00:17:56.092491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-18 00:17:56.092511 | orchestrator | 2025-09-18 00:17:56.092522 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-18 00:17:56.092533 | orchestrator | Thursday 18 September 2025 00:17:54 +0000 (0:00:00.068) 0:00:08.277 **** 2025-09-18 00:17:56.092608 | orchestrator | changed: [testbed-manager] 2025-09-18 00:17:56.092620 | orchestrator | 2025-09-18 00:17:56.092631 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:17:56.092643 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 00:17:56.092654 | orchestrator | 2025-09-18 00:17:56.092665 | orchestrator | 2025-09-18 00:17:56.092676 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:17:56.092687 | orchestrator | Thursday 18 September 2025 00:17:55 +0000 (0:00:01.091) 0:00:09.369 **** 2025-09-18 00:17:56.092697 | orchestrator | =============================================================================== 2025-09-18 00:17:56.092708 | orchestrator | Gathering Facts --------------------------------------------------------- 3.69s 2025-09-18 00:17:56.092719 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2025-09-18 00:17:56.092729 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-09-18 00:17:56.092740 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2025-09-18 00:17:56.092751 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2025-09-18 00:17:56.092761 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2025-09-18 00:17:56.092791 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2025-09-18 00:17:56.092802 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-18 00:17:56.092813 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-09-18 
00:17:56.092824 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-09-18 00:17:56.092835 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-18 00:17:56.092845 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-09-18 00:17:56.092856 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-09-18 00:17:56.335711 | orchestrator | + osism apply sshconfig 2025-09-18 00:18:08.311994 | orchestrator | 2025-09-18 00:18:08 | INFO  | Task 921383d7-1d11-4e57-853c-4c866a9a13ce (sshconfig) was prepared for execution. 2025-09-18 00:18:08.312123 | orchestrator | 2025-09-18 00:18:08 | INFO  | It takes a moment until task 921383d7-1d11-4e57-853c-4c866a9a13ce (sshconfig) has been started and output is visible here. 2025-09-18 00:18:19.696244 | orchestrator | 2025-09-18 00:18:19.696388 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-18 00:18:19.696407 | orchestrator | 2025-09-18 00:18:19.696419 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-18 00:18:19.696431 | orchestrator | Thursday 18 September 2025 00:18:12 +0000 (0:00:00.173) 0:00:00.173 **** 2025-09-18 00:18:19.696479 | orchestrator | ok: [testbed-manager] 2025-09-18 00:18:19.696494 | orchestrator | 2025-09-18 00:18:19.696505 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-18 00:18:19.696517 | orchestrator | Thursday 18 September 2025 00:18:12 +0000 (0:00:00.589) 0:00:00.762 **** 2025-09-18 00:18:19.696528 | orchestrator | changed: [testbed-manager] 2025-09-18 00:18:19.696539 | orchestrator | 2025-09-18 00:18:19.696550 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-18 00:18:19.696562 | orchestrator | Thursday 18 September 2025 00:18:13 +0000 (0:00:00.487) 0:00:01.250 **** 2025-09-18 00:18:19.696609 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-18 00:18:19.696621 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-18 00:18:19.696656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-18 00:18:19.696668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-18 00:18:19.696679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-18 00:18:19.696707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-18 00:18:19.696719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-18 00:18:19.696729 | orchestrator | 2025-09-18 00:18:19.696740 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-18 00:18:19.696751 | orchestrator | Thursday 18 September 2025 00:18:18 +0000 (0:00:05.612) 0:00:06.862 **** 2025-09-18 00:18:19.696762 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:18:19.696772 | orchestrator | 2025-09-18 00:18:19.696783 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-18 00:18:19.696796 | orchestrator | Thursday 18 September 2025 00:18:18 +0000 (0:00:00.061) 0:00:06.923 **** 2025-09-18 00:18:19.696808 | orchestrator | changed: [testbed-manager] 2025-09-18 00:18:19.696821 | orchestrator | 2025-09-18 00:18:19.696833 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:18:19.696846 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:18:19.696859 | orchestrator | 2025-09-18 00:18:19.696872 | orchestrator | 2025-09-18 00:18:19.696884 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:18:19.696896 | orchestrator | Thursday 18 September 2025 00:18:19 +0000 (0:00:00.545) 0:00:07.469 **** 2025-09-18 00:18:19.696909 | orchestrator | =============================================================================== 2025-09-18 00:18:19.696921 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.61s 2025-09-18 00:18:19.696933 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-09-18 00:18:19.696947 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2025-09-18 00:18:19.696959 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-09-18 00:18:19.696972 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-09-18 00:18:19.943228 | orchestrator | + osism apply known-hosts 2025-09-18 00:18:31.821223 | orchestrator | 2025-09-18 00:18:31 | INFO  | Task bb5e67c7-528a-4e50-a9b9-0079eea23fca (known-hosts) was prepared for execution. 2025-09-18 00:18:31.821351 | orchestrator | 2025-09-18 00:18:31 | INFO  | It takes a moment until task bb5e67c7-528a-4e50-a9b9-0079eea23fca (known-hosts) has been started and output is visible here. 2025-09-18 00:18:47.606286 | orchestrator | 2025-09-18 00:18:47.606385 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-18 00:18:47.606398 | orchestrator | 2025-09-18 00:18:47.606408 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-18 00:18:47.606418 | orchestrator | Thursday 18 September 2025 00:18:35 +0000 (0:00:00.121) 0:00:00.121 **** 2025-09-18 00:18:47.606428 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-18 00:18:47.606438 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-18 00:18:47.606447 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-18 00:18:47.606456 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-18 00:18:47.606466 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-18 00:18:47.606475 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-18 00:18:47.606483 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-18 00:18:47.606492 | orchestrator | 2025-09-18 00:18:47.606502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-18 00:18:47.606512 | orchestrator | Thursday 18 September 2025 00:18:41 +0000 (0:00:05.711) 0:00:05.832 **** 2025-09-18 00:18:47.606538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-18 00:18:47.606549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-3) 2025-09-18 00:18:47.606587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-18 00:18:47.606596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-18 00:18:47.606605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-18 00:18:47.606621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-18 00:18:47.606631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-18 00:18:47.606639 | orchestrator | 2025-09-18 00:18:47.606648 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:47.606657 | orchestrator | Thursday 18 September 2025 00:18:41 +0000 (0:00:00.155) 0:00:05.987 **** 2025-09-18 00:18:47.606666 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5Ci8tG6cRQd6CW0uFjK+DuxvqAYzjcoPEwhVBTG2gy) 2025-09-18 00:18:47.606679 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUrrLOna6Nr7t672ZzOAgTmB/VFBGPmRY4+QUrDVU10tECt9DURZJuw3xun/gKn9kwCOUSX7wtzLJoHx9m6VOlgWpziLmWMSqxicSW3KClZmhQzdQK5D9gdnOARP6sjDrPWeqSbyKFcuPftKdjB/FXaeD4n4OoWJ0rSY/mAdt1q3kieoCf/nvZjeWo3iLXPBiYAuMuPBwEaHG2nBz3XJ+wbH+UlVJEYUJU2hoNBMadsF5OmK5W/GlpmSiJXEXPstePELHC14N8DZpbyWDIcdADafdm2VJcT891jMrOB/pAF4US0AP7yvl/pCkxmXTIbVsijI/ZgAM8JJwMdYkxvviB//buomE05X3AGVLOb0va3ltw5uoksV1tZ5eOEQsoJ57qVPO18XqmHn4K4xeFLLtwWSRlsKFddIXO6sYiDvyxweQ0WqT+EMey8Cb0CC40Q5Ezq11s3pnszFNCUcCKB5ziCiNEaLH+dHkdUUYnQ4rcTR53wBg3+dZ4/IJfij7m4Sc=) 2025-09-18 00:18:47.606691 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF26HYXLEm7qrEv3YLbpd0Ad5ssFCuTq48KrRwYgC9orV64siDZNYEfHvda3lPmJV2Yq9NyAC5M459qJMKFsuhs=) 2025-09-18 00:18:47.606701 | orchestrator | 2025-09-18 00:18:47.606710 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:47.606719 | orchestrator | Thursday 18 September 2025 00:18:42 +0000 (0:00:01.120) 0:00:07.107 **** 2025-09-18 00:18:47.606742 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHRpuYO/48g8vw4dxLrkXM/GLqCIrvkbo4os5Xd7otCoey0FQ52rLHpaFF1MUnH73dDMpAbxQTfYog9+mmymYRvt4Ce/0qRRfxbwlWyZ6N3k4sjbdjIen0wArr/tMl1g2W/qm0gVaz/g4c1DVxnGvesUUIGzQ5zuX5zE3evWQmTWKWG9w9x3jZaXHK60K83f6lgX+y7uhDP9yyshdMlc3P39dbSPYNxYZKqoFvdbOS2IQqju1u2b6ktNa7wwimZYTxpgBA8Wgv01QpNJYJ9z0b0TUuoP+VDeDmbdXPu5pn6efnM8LqOwldNhYT6z1/P0Su1HznV70ajpmUz7f/GfDw/CH/yqFYrdGw2a/ooVhLQ1h5pO/hOUApsd96ML5eURTrclTWpqEE10B9P9q8LC6eNESzqAbIDxYXdeD1sw0tgb6vFc00MoQZO35xHTaGE+kaTMyoQVaCjJE2qdY/loqZdUXE7fomqJeDApW+USvs1vstNP3ChZvIyPofOTaXKV0=) 2025-09-18 00:18:47.606753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRdn+PHYzzZsih5j6IIpHpNPIyVb35CfMwWvXquzPnY) 2025-09-18 00:18:47.606761 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJY7aql51LzeNj27n4IJ7PN3LnLIMzs4P5Lk4SaOwzlTBlAw86KfN6D4FILUcU47GxOI6IeBoBFASP0nEZ9E9QU=) 2025-09-18 00:18:47.606777 | orchestrator | 2025-09-18 00:18:47.606786 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:47.606794 | orchestrator | Thursday 18 September 2025 00:18:43 +0000 (0:00:01.021) 0:00:08.129 **** 2025-09-18 00:18:47.606803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEjgtZEVkHFr4VHuAWAZ4Z/U5zrUJxa/3eRHZjSvNSDkkYUD311n0M3RKvf/gSHno4L7T1TlF1F+FagJXX0fnk=) 2025-09-18 00:18:47.606812 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTjUstW2GDC1qGRoMEsufbOJmKkK69Au7Y+GeFlJjkK) 2025-09-18 00:18:47.606821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY8DWiArYe/9pZMJ1G6jd2PwWgx7a8PY623aOQ69oPgUjScwkCn4stV9gMx6xvqhKpzwKcw2Fl9E9DTe66lvubleclf6hLdr3Oo/jW73haT9BlT64p44SBSfd2J596Wp762sfttzRtvHbS6CNYm4Ide2uyCoK8BZ246bDmBFQ9n+YBGxIMFSFJ0qpI4J26b8yDmZ5RSPlJste51+hVdbc0zk+TperZwkOeBxJQRv2p2IGB+SGEhZsJUAainc9lXyFQUzpiouskDh3bXyBthlxT7NN87JFbQxVLda1OQVIM3Sp42696dQsGV7Jv2iS7K5c/jtYaOfY4P1IzoepqRNC3ZDu/9+4aRJLENKgBXeVTu8gbNgcUIyiuWTv5sO+p8pO4ve1GkTRgCJA1S4TuRQ/DjjSBN422DJgZoeUEn/taqGygPfZbQKBdf6EgLCnb/K4xy7hYlb13hvHf8QoYJf/NG8VeBFUH5iPMlQO0IBmuUc/8lJ5D78BWTKkG2Q27Chk=) 2025-09-18 00:18:47.606830 | orchestrator | 2025-09-18 00:18:47.606841 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:47.606851 | orchestrator | Thursday 18 September 2025 00:18:44 +0000 (0:00:01.015) 0:00:09.145 **** 2025-09-18 00:18:47.606909 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWLO1+ln2OvQ5I3gNIrxrC5oeTRnWR+Ak2i03DY4FpVpVVYAlGCl5mXNqAtJqEUxNe4kEujJ/S12lOD9FhB/4QwIWONjkas+2KtChyaNM2s3Fny9vqv4gdQ75gkMrO/bac6btvoHPLtb/tK1KyEojuWnkZnrWGpa++bQJx4FjHoj4EQhedOXVcoTgXEchh4JcceSVMmLLfq4vrAbt/yyYXoc/zwszT82eKk1qXw4dn6BouHvHTyMLywyHEOfuCFLRzy3th2MIIM5xhPo8BxSuL7aQtoVuimYDRSlbAaImazTAU9o/raiSLY+XwIrshH36Dyn21+rArgef9/Hpk4QWk1tU6mTUgOhmKHmE6aD1zYVleZ+S/iT3LOYhUqYUDXsTpFJHdaAYbBF53Dn7KPCu+GawaQRbFj/3JJLiw+rpZZQ9iZWnZxKnXCdVT34VeK6dMj29Cxj7EHSg4IVBbeEtr5IBnE8qUKrjVe31bBSbKp67NP2GBb3jYqy7Mv94s+10=) 2025-09-18 00:18:47.606920 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB6A9cNhOwd1t2TQfGIHZqIx5ost72fu630RJw2XOzaFdKd/QsSRyJd2aRcEZDpZdrZpBnZVGhpbjeO9UTDlHk4=) 2025-09-18 00:18:47.606930 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPa+eVhTivvqd/Lgq69uZRdzlY7e7wWncCtFwXITWsyT) 2025-09-18 00:18:47.606940 | orchestrator | 2025-09-18 00:18:47.606950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:47.606959 | orchestrator | Thursday 18 September 2025 00:18:45 +0000 (0:00:01.046) 0:00:10.191 **** 2025-09-18 00:18:47.606969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDORSdg7/NLJJe7Xjjyn9KP+FndrGRnUa3LhNknRgYFOhD9Ia7wsp233uCgUWzV+6o1YVQ8syEE1MSR1LruOwtY=) 2025-09-18 00:18:47.606980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXyFxylUGjA0JCznaLs7BJuEeCMVrJkJJ7QgLIH2T03) 2025-09-18 00:18:47.606991 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbFPtT+uno/mtmODpaSE+pK9wZhEOZDiVyKoDaIWZ69YRjl3PSzDXPWzrTM/5zRvVLHB4mQuNKjfZtlFrjzgiG/eTvSeUGtip9MxTyfHbiqB3zVE2n9QbE0EkwWm8TvmWGhL1OBVVBqr/JsVdcqKB9uzA+2XT3RZyh86rKsw/c2ymRqpa0vlx3VSkoBIyh+PZG29pWPe2XbwW+3Qi/Js2DG4RJglzSXEzFQw+ODdJawnxyC6w1bwxY1qvL0lmLk9/wPEDKtJXLGkI2CADZ2GClYfxJt6tJloPul1Qkmp6NdzRlEROZ1WiDkJXV1lJITC2d4sEZThBVVUdt7Z3R08gPpLRXiPBQFCbO/Imt49pR5jlzkKDfyXknELqPLxtb+FoRXSScGY7eQdCv5OQLpk8kF/cOQCIk9dTQEFiogwqjc/KNGyx5+pkveTgoVYoarCL+TDed6y24Q545Aje9zIKsk4k6bP0B8zvRMR5dD+zjIgwVdNSXk7T6ECTlAIgssrs=) 2025-09-18 00:18:47.607006 | orchestrator | 2025-09-18 00:18:47.607017 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:47.607027 | orchestrator | Thursday 18 September 2025 00:18:46 +0000 (0:00:01.017) 0:00:11.208 **** 2025-09-18 00:18:47.607042 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEugyMzFBQ3EFADm8NehNoXSGJc1pet8x2ffE8om8CjK) 2025-09-18 00:18:58.142865 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBgOm+Mfis3l4b+EdSeGGiNqlsqp5yf8/T0DQVSIvDnVehcunsvtVnC94pZK8jVVl9hZikFSDQzCgr7CNG+8fdICmbcfH8M4AS+HpjSVYC0kMcG7CUVa4lc6vc43EKJdrpSfgnLRwM7w6FXLupNdZWIbVdAXYke/gJKCnd4slob9tt5TR4jqxjS+Jwv+OTwhYXsbFGEjIVofkirwQMnj5SjWpg2iAaAZO6cL5yXWcsFZ40E1Tu9vqPMhPY8B/Il3atvQKiLUDI9jZ6/5Uo4vnD/cSbR3r/vED4m5mfNzIvzbPES9n0Zy1LTqlJE2iUbMPleSjY5N/RyqetkctwvJd1tCZacwt1D/9Kbpqr6Jr6G9jx/ZOuCV25OrlK5W31Xz3REwj20kc7WXzcxfL5pUOQcTrI3qhFm3INMuzhoZG3frYVIAt7MQIbTaD1duBBwfPb5e1VosMqG6l009PlDQy4BPqyH5NSglkgXao9GoYJTB7MYBlz9eCaYkxXdBWGAvU=) 2025-09-18 00:18:58.142986 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPerRQaa6qx2ALx2pMstTh5JcdmeU0i4wcot2o5NGf0z1IeRkE5UomD0oGzb7Fu3ZuJ6cfgZKFZ7VPRfWToxYsE=) 2025-09-18 00:18:58.143005 | orchestrator | 2025-09-18 00:18:58.143018 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:58.143031 | orchestrator | Thursday 18 September 2025 00:18:47 +0000 (0:00:01.016) 0:00:12.225 **** 2025-09-18 00:18:58.143042 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA3E7MN1Lop2KaC9jGIQpaWKeb/pQ/TXzkNW9x5gOobAJrD81QrzR6v0XXV5UN4uK54OUvNGi1YwrmpdPYNJlKw=) 2025-09-18 00:18:58.143054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK7EZGp3NoFEzbGhZHofw6cVkSWATaxzx9Vtip/6j4ut) 2025-09-18 00:18:58.143067 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCoRT0olMQ4lmIlqT4mTQZ2FuJ2Z9CfLd0l/5BE4TI2rMP6EcTbAK37dc/9AbvYFynUHFckyPHgaoHHIqlmp2iaeRXJkUqsV/yCO1I8hdpYr9N67/+LNT5LG0J2o/UtJMeswvZfAeNz+xcCQQG9ML8rbov8xx8V7EWr3Y/JcJiasgva0XLOjGm7QxisM68P5Vr2iSZhM+IVdsFBFhKDNdByLUq6hejGP015gZjQUHpRU9AAyj353BhNB86Vi+ED8ZYl2htvcodAUCrbyEgLKtrz8h8bBJqWlekSk88nrFF3Tt84fm5kPajJEA8Y5dd0gwPQ61YRUvw1FKhJJ4/9YpceJfKZcpLCZuS0woa96HQxAfxcUilae3oNi8gBhTxl9waeE58clVjXF97LC9D/XxLfkmFW3VmYnQDaIIaaAQFSLSfH0N2Ex5SM0T74i+ltwqbBsQAC2Vi7J1w/ub3csJQc0HBEKeOO43pzq/dYCTy7yF4rQlzpa95StLd/YYVUcWE=) 2025-09-18 00:18:58.143079 | orchestrator | 2025-09-18 00:18:58.143091 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-18 00:18:58.143103 | orchestrator | Thursday 18 September 2025 00:18:48 +0000 (0:00:01.013) 0:00:13.238 **** 2025-09-18 00:18:58.143115 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-18 00:18:58.143126 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-18 00:18:58.143137 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-18 00:18:58.143148 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-18 00:18:58.143159 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-18 00:18:58.143170 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-18 00:18:58.143181 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-18 00:18:58.143192 | orchestrator | 2025-09-18 00:18:58.143203 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-18 00:18:58.143234 | orchestrator | Thursday 18 September 2025 00:18:53 +0000 (0:00:05.252) 0:00:18.490 **** 2025-09-18 00:18:58.143247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-18 00:18:58.143260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-18 00:18:58.143298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-18 00:18:58.143310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-18 00:18:58.143321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-18 00:18:58.143332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-18 00:18:58.143343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-18 00:18:58.143354 | orchestrator | 2025-09-18 00:18:58.143381 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 
00:18:58.143393 | orchestrator | Thursday 18 September 2025 00:18:54 +0000 (0:00:00.165) 0:00:18.656 **** 2025-09-18 00:18:58.143406 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUrrLOna6Nr7t672ZzOAgTmB/VFBGPmRY4+QUrDVU10tECt9DURZJuw3xun/gKn9kwCOUSX7wtzLJoHx9m6VOlgWpziLmWMSqxicSW3KClZmhQzdQK5D9gdnOARP6sjDrPWeqSbyKFcuPftKdjB/FXaeD4n4OoWJ0rSY/mAdt1q3kieoCf/nvZjeWo3iLXPBiYAuMuPBwEaHG2nBz3XJ+wbH+UlVJEYUJU2hoNBMadsF5OmK5W/GlpmSiJXEXPstePELHC14N8DZpbyWDIcdADafdm2VJcT891jMrOB/pAF4US0AP7yvl/pCkxmXTIbVsijI/ZgAM8JJwMdYkxvviB//buomE05X3AGVLOb0va3ltw5uoksV1tZ5eOEQsoJ57qVPO18XqmHn4K4xeFLLtwWSRlsKFddIXO6sYiDvyxweQ0WqT+EMey8Cb0CC40Q5Ezq11s3pnszFNCUcCKB5ziCiNEaLH+dHkdUUYnQ4rcTR53wBg3+dZ4/IJfij7m4Sc=) 2025-09-18 00:18:58.143419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF26HYXLEm7qrEv3YLbpd0Ad5ssFCuTq48KrRwYgC9orV64siDZNYEfHvda3lPmJV2Yq9NyAC5M459qJMKFsuhs=) 2025-09-18 00:18:58.143432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5Ci8tG6cRQd6CW0uFjK+DuxvqAYzjcoPEwhVBTG2gy) 2025-09-18 00:18:58.143445 | orchestrator | 2025-09-18 00:18:58.143457 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:58.143470 | orchestrator | Thursday 18 September 2025 00:18:55 +0000 (0:00:01.017) 0:00:19.673 **** 2025-09-18 00:18:58.143482 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJY7aql51LzeNj27n4IJ7PN3LnLIMzs4P5Lk4SaOwzlTBlAw86KfN6D4FILUcU47GxOI6IeBoBFASP0nEZ9E9QU=) 2025-09-18 00:18:58.143495 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHRpuYO/48g8vw4dxLrkXM/GLqCIrvkbo4os5Xd7otCoey0FQ52rLHpaFF1MUnH73dDMpAbxQTfYog9+mmymYRvt4Ce/0qRRfxbwlWyZ6N3k4sjbdjIen0wArr/tMl1g2W/qm0gVaz/g4c1DVxnGvesUUIGzQ5zuX5zE3evWQmTWKWG9w9x3jZaXHK60K83f6lgX+y7uhDP9yyshdMlc3P39dbSPYNxYZKqoFvdbOS2IQqju1u2b6ktNa7wwimZYTxpgBA8Wgv01QpNJYJ9z0b0TUuoP+VDeDmbdXPu5pn6efnM8LqOwldNhYT6z1/P0Su1HznV70ajpmUz7f/GfDw/CH/yqFYrdGw2a/ooVhLQ1h5pO/hOUApsd96ML5eURTrclTWpqEE10B9P9q8LC6eNESzqAbIDxYXdeD1sw0tgb6vFc00MoQZO35xHTaGE+kaTMyoQVaCjJE2qdY/loqZdUXE7fomqJeDApW+USvs1vstNP3ChZvIyPofOTaXKV0=) 2025-09-18 00:18:58.143508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRdn+PHYzzZsih5j6IIpHpNPIyVb35CfMwWvXquzPnY) 2025-09-18 00:18:58.143520 | orchestrator | 2025-09-18 00:18:58.143532 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:58.143545 | orchestrator | Thursday 18 September 2025 00:18:56 +0000 (0:00:01.031) 0:00:20.705 **** 2025-09-18 00:18:58.143601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY8DWiArYe/9pZMJ1G6jd2PwWgx7a8PY623aOQ69oPgUjScwkCn4stV9gMx6xvqhKpzwKcw2Fl9E9DTe66lvubleclf6hLdr3Oo/jW73haT9BlT64p44SBSfd2J596Wp762sfttzRtvHbS6CNYm4Ide2uyCoK8BZ246bDmBFQ9n+YBGxIMFSFJ0qpI4J26b8yDmZ5RSPlJste51+hVdbc0zk+TperZwkOeBxJQRv2p2IGB+SGEhZsJUAainc9lXyFQUzpiouskDh3bXyBthlxT7NN87JFbQxVLda1OQVIM3Sp42696dQsGV7Jv2iS7K5c/jtYaOfY4P1IzoepqRNC3ZDu/9+4aRJLENKgBXeVTu8gbNgcUIyiuWTv5sO+p8pO4ve1GkTRgCJA1S4TuRQ/DjjSBN422DJgZoeUEn/taqGygPfZbQKBdf6EgLCnb/K4xy7hYlb13hvHf8QoYJf/NG8VeBFUH5iPMlQO0IBmuUc/8lJ5D78BWTKkG2Q27Chk=) 2025-09-18 00:18:58.143614 | orchestrator | changed: [testbed-manager] 
=> (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEjgtZEVkHFr4VHuAWAZ4Z/U5zrUJxa/3eRHZjSvNSDkkYUD311n0M3RKvf/gSHno4L7T1TlF1F+FagJXX0fnk=) 2025-09-18 00:18:58.143626 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTjUstW2GDC1qGRoMEsufbOJmKkK69Au7Y+GeFlJjkK) 2025-09-18 00:18:58.143639 | orchestrator | 2025-09-18 00:18:58.143651 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:18:58.143664 | orchestrator | Thursday 18 September 2025 00:18:57 +0000 (0:00:01.044) 0:00:21.750 **** 2025-09-18 00:18:58.143693 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWLO1+ln2OvQ5I3gNIrxrC5oeTRnWR+Ak2i03DY4FpVpVVYAlGCl5mXNqAtJqEUxNe4kEujJ/S12lOD9FhB/4QwIWONjkas+2KtChyaNM2s3Fny9vqv4gdQ75gkMrO/bac6btvoHPLtb/tK1KyEojuWnkZnrWGpa++bQJx4FjHoj4EQhedOXVcoTgXEchh4JcceSVMmLLfq4vrAbt/yyYXoc/zwszT82eKk1qXw4dn6BouHvHTyMLywyHEOfuCFLRzy3th2MIIM5xhPo8BxSuL7aQtoVuimYDRSlbAaImazTAU9o/raiSLY+XwIrshH36Dyn21+rArgef9/Hpk4QWk1tU6mTUgOhmKHmE6aD1zYVleZ+S/iT3LOYhUqYUDXsTpFJHdaAYbBF53Dn7KPCu+GawaQRbFj/3JJLiw+rpZZQ9iZWnZxKnXCdVT34VeK6dMj29Cxj7EHSg4IVBbeEtr5IBnE8qUKrjVe31bBSbKp67NP2GBb3jYqy7Mv94s+10=) 2025-09-18 00:19:02.184209 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB6A9cNhOwd1t2TQfGIHZqIx5ost72fu630RJw2XOzaFdKd/QsSRyJd2aRcEZDpZdrZpBnZVGhpbjeO9UTDlHk4=) 2025-09-18 00:19:02.184311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPa+eVhTivvqd/Lgq69uZRdzlY7e7wWncCtFwXITWsyT) 2025-09-18 00:19:02.184328 | orchestrator | 2025-09-18 00:19:02.184341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:19:02.184354 | orchestrator | Thursday 18 September 2025 00:18:58 +0000 (0:00:01.011) 0:00:22.761 **** 2025-09-18 00:19:02.184365 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDORSdg7/NLJJe7Xjjyn9KP+FndrGRnUa3LhNknRgYFOhD9Ia7wsp233uCgUWzV+6o1YVQ8syEE1MSR1LruOwtY=) 2025-09-18 00:19:02.184379 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbFPtT+uno/mtmODpaSE+pK9wZhEOZDiVyKoDaIWZ69YRjl3PSzDXPWzrTM/5zRvVLHB4mQuNKjfZtlFrjzgiG/eTvSeUGtip9MxTyfHbiqB3zVE2n9QbE0EkwWm8TvmWGhL1OBVVBqr/JsVdcqKB9uzA+2XT3RZyh86rKsw/c2ymRqpa0vlx3VSkoBIyh+PZG29pWPe2XbwW+3Qi/Js2DG4RJglzSXEzFQw+ODdJawnxyC6w1bwxY1qvL0lmLk9/wPEDKtJXLGkI2CADZ2GClYfxJt6tJloPul1Qkmp6NdzRlEROZ1WiDkJXV1lJITC2d4sEZThBVVUdt7Z3R08gPpLRXiPBQFCbO/Imt49pR5jlzkKDfyXknELqPLxtb+FoRXSScGY7eQdCv5OQLpk8kF/cOQCIk9dTQEFiogwqjc/KNGyx5+pkveTgoVYoarCL+TDed6y24Q545Aje9zIKsk4k6bP0B8zvRMR5dD+zjIgwVdNSXk7T6ECTlAIgssrs=) 2025-09-18 00:19:02.184393 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXyFxylUGjA0JCznaLs7BJuEeCMVrJkJJ7QgLIH2T03) 2025-09-18 00:19:02.184405 | orchestrator | 2025-09-18 00:19:02.184416 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:19:02.184428 | orchestrator | Thursday 18 September 2025 00:18:59 +0000 (0:00:01.010) 0:00:23.772 **** 2025-09-18 00:19:02.184439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDBgOm+Mfis3l4b+EdSeGGiNqlsqp5yf8/T0DQVSIvDnVehcunsvtVnC94pZK8jVVl9hZikFSDQzCgr7CNG+8fdICmbcfH8M4AS+HpjSVYC0kMcG7CUVa4lc6vc43EKJdrpSfgnLRwM7w6FXLupNdZWIbVdAXYke/gJKCnd4slob9tt5TR4jqxjS+Jwv+OTwhYXsbFGEjIVofkirwQMnj5SjWpg2iAaAZO6cL5yXWcsFZ40E1Tu9vqPMhPY8B/Il3atvQKiLUDI9jZ6/5Uo4vnD/cSbR3r/vED4m5mfNzIvzbPES9n0Zy1LTqlJE2iUbMPleSjY5N/RyqetkctwvJd1tCZacwt1D/9Kbpqr6Jr6G9jx/ZOuCV25OrlK5W31Xz3REwj20kc7WXzcxfL5pUOQcTrI3qhFm3INMuzhoZG3frYVIAt7MQIbTaD1duBBwfPb5e1VosMqG6l009PlDQy4BPqyH5NSglkgXao9GoYJTB7MYBlz9eCaYkxXdBWGAvU=) 2025-09-18 00:19:02.184477 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEugyMzFBQ3EFADm8NehNoXSGJc1pet8x2ffE8om8CjK) 2025-09-18 00:19:02.184490 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPerRQaa6qx2ALx2pMstTh5JcdmeU0i4wcot2o5NGf0z1IeRkE5UomD0oGzb7Fu3ZuJ6cfgZKFZ7VPRfWToxYsE=) 2025-09-18 00:19:02.184501 | orchestrator | 2025-09-18 00:19:02.184512 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 00:19:02.184523 | orchestrator | Thursday 18 September 2025 00:19:00 +0000 (0:00:01.032) 0:00:24.805 **** 2025-09-18 00:19:02.184534 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA3E7MN1Lop2KaC9jGIQpaWKeb/pQ/TXzkNW9x5gOobAJrD81QrzR6v0XXV5UN4uK54OUvNGi1YwrmpdPYNJlKw=) 2025-09-18 00:19:02.184612 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoRT0olMQ4lmIlqT4mTQZ2FuJ2Z9CfLd0l/5BE4TI2rMP6EcTbAK37dc/9AbvYFynUHFckyPHgaoHHIqlmp2iaeRXJkUqsV/yCO1I8hdpYr9N67/+LNT5LG0J2o/UtJMeswvZfAeNz+xcCQQG9ML8rbov8xx8V7EWr3Y/JcJiasgva0XLOjGm7QxisM68P5Vr2iSZhM+IVdsFBFhKDNdByLUq6hejGP015gZjQUHpRU9AAyj353BhNB86Vi+ED8ZYl2htvcodAUCrbyEgLKtrz8h8bBJqWlekSk88nrFF3Tt84fm5kPajJEA8Y5dd0gwPQ61YRUvw1FKhJJ4/9YpceJfKZcpLCZuS0woa96HQxAfxcUilae3oNi8gBhTxl9waeE58clVjXF97LC9D/XxLfkmFW3VmYnQDaIIaaAQFSLSfH0N2Ex5SM0T74i+ltwqbBsQAC2Vi7J1w/ub3csJQc0HBEKeOO43pzq/dYCTy7yF4rQlzpa95StLd/YYVUcWE=) 2025-09-18 00:19:02.184628 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK7EZGp3NoFEzbGhZHofw6cVkSWATaxzx9Vtip/6j4ut) 2025-09-18 00:19:02.184640 | orchestrator | 2025-09-18 00:19:02.184651 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-18 00:19:02.184662 | orchestrator | Thursday 18 September 2025 00:19:01 +0000 (0:00:01.009) 0:00:25.814 **** 2025-09-18 00:19:02.184673 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-18 00:19:02.184685 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-18 00:19:02.184714 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-18 00:19:02.184726 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-18 00:19:02.184736 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-18 00:19:02.184749 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-18 00:19:02.184761 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-18 00:19:02.184775 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:19:02.184789 | orchestrator | 2025-09-18 00:19:02.184802 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-09-18 00:19:02.184814 | orchestrator | Thursday 18 September 2025 00:19:01 +0000 (0:00:00.152) 0:00:25.967 **** 2025-09-18 00:19:02.184827 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:19:02.184839 | orchestrator | 2025-09-18 00:19:02.184851 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-18 00:19:02.184864 | orchestrator | Thursday 18 September 2025 00:19:01 +0000 (0:00:00.067) 0:00:26.035 **** 2025-09-18 00:19:02.184877 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:19:02.184889 | orchestrator | 2025-09-18 00:19:02.184900 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-18 00:19:02.184911 | orchestrator | Thursday 18 September 2025 00:19:01 +0000 (0:00:00.051) 0:00:26.086 **** 2025-09-18 00:19:02.184931 | orchestrator | changed: [testbed-manager] 2025-09-18 00:19:02.184942 | orchestrator | 2025-09-18 00:19:02.184952 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:19:02.184964 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 00:19:02.184976 | orchestrator | 2025-09-18 00:19:02.184987 | orchestrator | 2025-09-18 00:19:02.184998 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:19:02.185009 | orchestrator | Thursday 18 September 2025 00:19:01 +0000 (0:00:00.497) 0:00:26.584 **** 2025-09-18 00:19:02.185020 | orchestrator | =============================================================================== 2025-09-18 00:19:02.185031 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.71s 2025-09-18 00:19:02.185042 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.25s 2025-09-18 00:19:02.185071 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-18 00:19:02.185082 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-18 00:19:02.185094 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-18 00:19:02.185105 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-18 00:19:02.185116 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-18 00:19:02.185126 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-18 00:19:02.185137 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-18 00:19:02.185148 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-18 00:19:02.185159 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-18 00:19:02.185170 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-18 00:19:02.185181 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-18 00:19:02.185192 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-18 00:19:02.185203 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-18 
00:19:02.185214 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-18 00:19:02.185225 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2025-09-18 00:19:02.185236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-18 00:19:02.185248 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-18 00:19:02.185264 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-09-18 00:19:02.445583 | orchestrator | + osism apply squid 2025-09-18 00:19:14.418245 | orchestrator | 2025-09-18 00:19:14 | INFO  | Task a41f554a-641b-416b-92ce-8bb1ab9e3d0a (squid) was prepared for execution. 2025-09-18 00:19:14.418357 | orchestrator | 2025-09-18 00:19:14 | INFO  | It takes a moment until task a41f554a-641b-416b-92ce-8bb1ab9e3d0a (squid) has been started and output is visible here. 2025-09-18 00:21:07.897857 | orchestrator | 2025-09-18 00:21:07.897973 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-18 00:21:07.897988 | orchestrator | 2025-09-18 00:21:07.897998 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-18 00:21:07.898009 | orchestrator | Thursday 18 September 2025 00:19:18 +0000 (0:00:00.158) 0:00:00.158 **** 2025-09-18 00:21:07.898076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 00:21:07.898096 | orchestrator | 2025-09-18 00:21:07.898114 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-18 00:21:07.898160 | orchestrator | Thursday 18 September 2025 00:19:18 +0000 (0:00:00.081) 0:00:00.240 **** 2025-09-18 00:21:07.898172 | orchestrator | ok: [testbed-manager] 2025-09-18 00:21:07.898183 | orchestrator | 2025-09-18 00:21:07.898193 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-18 00:21:07.898203 | orchestrator | Thursday 18 September 2025 00:19:19 +0000 (0:00:01.342) 0:00:01.582 **** 2025-09-18 00:21:07.898213 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-18 00:21:07.898223 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-18 00:21:07.898232 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-18 00:21:07.898242 | orchestrator | 2025-09-18 00:21:07.898252 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-18 00:21:07.898261 | orchestrator | Thursday 18 September 2025 00:19:20 +0000 (0:00:01.125) 0:00:02.708 **** 2025-09-18 00:21:07.898271 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-18 00:21:07.898280 | orchestrator | 2025-09-18 00:21:07.898290 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-18 00:21:07.898300 | orchestrator | Thursday 18 September 2025 00:19:21 +0000 (0:00:01.045) 0:00:03.753 **** 2025-09-18 00:21:07.898309 | orchestrator | ok: [testbed-manager] 2025-09-18 00:21:07.898318 | orchestrator | 2025-09-18 00:21:07.898328 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 
2025-09-18 00:21:07.898337 | orchestrator | Thursday 18 September 2025 00:19:22 +0000 (0:00:00.402) 0:00:04.155 **** 2025-09-18 00:21:07.898347 | orchestrator | changed: [testbed-manager] 2025-09-18 00:21:07.898357 | orchestrator | 2025-09-18 00:21:07.898366 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-18 00:21:07.898376 | orchestrator | Thursday 18 September 2025 00:19:23 +0000 (0:00:00.870) 0:00:05.026 **** 2025-09-18 00:21:07.898385 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-18 00:21:07.898395 | orchestrator | ok: [testbed-manager] 2025-09-18 00:21:07.898405 | orchestrator | 2025-09-18 00:21:07.898414 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-18 00:21:07.898424 | orchestrator | Thursday 18 September 2025 00:19:54 +0000 (0:00:31.682) 0:00:36.708 **** 2025-09-18 00:21:07.898433 | orchestrator | changed: [testbed-manager] 2025-09-18 00:21:07.898443 | orchestrator | 2025-09-18 00:21:07.898452 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-18 00:21:07.898501 | orchestrator | Thursday 18 September 2025 00:20:06 +0000 (0:00:12.093) 0:00:48.802 **** 2025-09-18 00:21:07.898512 | orchestrator | Pausing for 60 seconds 2025-09-18 00:21:07.898522 | orchestrator | changed: [testbed-manager] 2025-09-18 00:21:07.898532 | orchestrator | 2025-09-18 00:21:07.898542 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-18 00:21:07.898551 | orchestrator | Thursday 18 September 2025 00:21:06 +0000 (0:01:00.071) 0:01:48.873 **** 2025-09-18 00:21:07.898561 | orchestrator | ok: [testbed-manager] 2025-09-18 00:21:07.898570 | orchestrator | 2025-09-18 00:21:07.898580 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-18 00:21:07.898589 | orchestrator | Thursday 18 September 2025 00:21:07 +0000 (0:00:00.070) 0:01:48.944 **** 2025-09-18 00:21:07.898599 | orchestrator | changed: [testbed-manager] 2025-09-18 00:21:07.898609 | orchestrator | 2025-09-18 00:21:07.898618 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:21:07.898628 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:21:07.898638 | orchestrator | 2025-09-18 00:21:07.898647 | orchestrator | 2025-09-18 00:21:07.898657 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:21:07.898666 | orchestrator | Thursday 18 September 2025 00:21:07 +0000 (0:00:00.642) 0:01:49.586 **** 2025-09-18 00:21:07.898685 | orchestrator | =============================================================================== 2025-09-18 00:21:07.898694 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-18 00:21:07.898704 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.68s 2025-09-18 00:21:07.898713 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.09s 2025-09-18 00:21:07.898723 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.34s 2025-09-18 00:21:07.898732 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s 2025-09-18 
00:21:07.898742 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-09-18 00:21:07.898752 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.87s 2025-09-18 00:21:07.898761 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-09-18 00:21:07.898771 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.40s 2025-09-18 00:21:07.898780 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-09-18 00:21:07.898790 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-18 00:21:08.160770 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-18 00:21:08.161181 | orchestrator | ++ semver latest 9.0.0 2025-09-18 00:21:08.215632 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-18 00:21:08.215710 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-18 00:21:08.217700 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-18 00:21:20.187114 | orchestrator | 2025-09-18 00:21:20 | INFO  | Task 1f57bee4-6bf4-406d-9a93-1b67670e86cb (operator) was prepared for execution. 2025-09-18 00:21:20.187228 | orchestrator | 2025-09-18 00:21:20 | INFO  | It takes a moment until task 1f57bee4-6bf4-406d-9a93-1b67670e86cb (operator) has been started and output is visible here. 2025-09-18 00:21:35.631881 | orchestrator | 2025-09-18 00:21:35.631999 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-18 00:21:35.632016 | orchestrator | 2025-09-18 00:21:35.632028 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 00:21:35.632039 | orchestrator | Thursday 18 September 2025 00:21:23 +0000 (0:00:00.108) 0:00:00.108 **** 2025-09-18 00:21:35.632070 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:21:35.632083 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:21:35.632094 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:21:35.632105 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:21:35.632116 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:21:35.632126 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:21:35.632137 | orchestrator | 2025-09-18 00:21:35.632148 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-18 00:21:35.632159 | orchestrator | Thursday 18 September 2025 00:21:27 +0000 (0:00:03.432) 0:00:03.540 **** 2025-09-18 00:21:35.632170 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:21:35.632181 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:21:35.632192 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:21:35.632203 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:21:35.632214 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:21:35.632224 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:21:35.632235 | orchestrator | 2025-09-18 00:21:35.632246 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-18 00:21:35.632257 | orchestrator | 2025-09-18 00:21:35.632268 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-18 00:21:35.632279 | orchestrator | Thursday 18 September 2025 00:21:27 +0000 (0:00:00.774) 0:00:04.315 **** 2025-09-18 00:21:35.632289 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:21:35.632300 | orchestrator | ok: 
[testbed-node-1] 2025-09-18 00:21:35.632311 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:21:35.632321 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:21:35.632332 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:21:35.632342 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:21:35.632377 | orchestrator | 2025-09-18 00:21:35.632389 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-18 00:21:35.632400 | orchestrator | Thursday 18 September 2025 00:21:27 +0000 (0:00:00.150) 0:00:04.465 **** 2025-09-18 00:21:35.632410 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:21:35.632424 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:21:35.632462 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:21:35.632475 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:21:35.632487 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:21:35.632499 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:21:35.632511 | orchestrator | 2025-09-18 00:21:35.632523 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-18 00:21:35.632536 | orchestrator | Thursday 18 September 2025 00:21:28 +0000 (0:00:00.111) 0:00:04.576 **** 2025-09-18 00:21:35.632548 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:21:35.632561 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:21:35.632574 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:21:35.632587 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:21:35.632599 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:21:35.632612 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:21:35.632624 | orchestrator | 2025-09-18 00:21:35.632636 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-18 00:21:35.632649 | orchestrator | Thursday 18 September 2025 00:21:28 +0000 (0:00:00.702) 0:00:05.279 **** 2025-09-18 00:21:35.632662 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:21:35.632674 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:21:35.632686 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:21:35.632699 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:21:35.632711 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:21:35.632724 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:21:35.632736 | orchestrator | 2025-09-18 00:21:35.632749 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-18 00:21:35.632761 | orchestrator | Thursday 18 September 2025 00:21:29 +0000 (0:00:00.790) 0:00:06.069 **** 2025-09-18 00:21:35.632773 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-18 00:21:35.632784 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-18 00:21:35.632795 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-18 00:21:35.632805 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-18 00:21:35.632816 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-18 00:21:35.632827 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-18 00:21:35.632837 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-18 00:21:35.632848 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-18 00:21:35.632859 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-18 00:21:35.632869 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-18 00:21:35.632880 | orchestrator | changed: 
[testbed-node-0] => (item=sudo) 2025-09-18 00:21:35.632891 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-18 00:21:35.632902 | orchestrator | 2025-09-18 00:21:35.632913 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-18 00:21:35.632923 | orchestrator | Thursday 18 September 2025 00:21:30 +0000 (0:00:01.181) 0:00:07.251 **** 2025-09-18 00:21:35.632940 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:21:35.632951 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:21:35.632962 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:21:35.632973 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:21:35.632984 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:21:35.632994 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:21:35.633005 | orchestrator | 2025-09-18 00:21:35.633016 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-18 00:21:35.633028 | orchestrator | Thursday 18 September 2025 00:21:31 +0000 (0:00:01.254) 0:00:08.505 **** 2025-09-18 00:21:35.633039 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-18 00:21:35.633059 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-09-18 00:21:35.633070 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-18 00:21:35.633081 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:21:35.633111 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:21:35.633122 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:21:35.633133 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:21:35.633144 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:21:35.633154 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 00:21:35.633165 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-18 00:21:35.633175 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-18 00:21:35.633186 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-18 00:21:35.633196 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-18 00:21:35.633207 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-18 00:21:35.633218 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-18 00:21:35.633228 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:21:35.633239 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:21:35.633250 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:21:35.633260 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:21:35.633271 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:21:35.633281 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-18 00:21:35.633292 | orchestrator | 2025-09-18 00:21:35.633303 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-18 00:21:35.633314 | orchestrator | Thursday 
18 September 2025 00:21:33 +0000 (0:00:01.360) 0:00:09.865 **** 2025-09-18 00:21:35.633325 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:21:35.633336 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:21:35.633346 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:21:35.633357 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:21:35.633367 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:21:35.633378 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:21:35.633388 | orchestrator | 2025-09-18 00:21:35.633399 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-18 00:21:35.633410 | orchestrator | Thursday 18 September 2025 00:21:33 +0000 (0:00:00.178) 0:00:10.044 **** 2025-09-18 00:21:35.633420 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:21:35.633431 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:21:35.633478 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:21:35.633489 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:21:35.633500 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:21:35.633511 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:21:35.633522 | orchestrator | 2025-09-18 00:21:35.633533 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-18 00:21:35.633544 | orchestrator | Thursday 18 September 2025 00:21:34 +0000 (0:00:00.629) 0:00:10.673 **** 2025-09-18 00:21:35.633555 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:21:35.633566 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:21:35.633577 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:21:35.633587 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:21:35.633598 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:21:35.633609 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:21:35.633619 | orchestrator | 2025-09-18 00:21:35.633630 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-18 00:21:35.633648 | orchestrator | Thursday 18 September 2025 00:21:34 +0000 (0:00:00.251) 0:00:10.925 **** 2025-09-18 00:21:35.633660 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 00:21:35.633674 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:21:35.633686 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 00:21:35.633696 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:21:35.633707 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 00:21:35.633718 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:21:35.633728 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-18 00:21:35.633739 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:21:35.633750 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 00:21:35.633761 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:21:35.633772 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-18 00:21:35.633783 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:21:35.633793 | orchestrator | 2025-09-18 00:21:35.633804 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-18 00:21:35.633815 | orchestrator | Thursday 18 September 2025 00:21:35 +0000 (0:00:00.759) 0:00:11.684 **** 2025-09-18 00:21:35.633826 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:21:35.633836 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 00:21:35.633847 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:21:35.633858 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:21:35.633868 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:21:35.633879 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:21:35.633890 | orchestrator | 2025-09-18 00:21:35.633901 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-18 00:21:35.633918 | orchestrator | Thursday 18 September 2025 00:21:35 +0000 (0:00:00.155) 0:00:11.839 **** 2025-09-18 00:21:35.633929 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:21:35.633940 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:21:35.633951 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:21:35.633962 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:21:35.633972 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:21:35.633983 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:21:35.633994 | orchestrator | 2025-09-18 00:21:35.634005 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-18 00:21:35.634083 | orchestrator | Thursday 18 September 2025 00:21:35 +0000 (0:00:00.168) 0:00:12.008 **** 2025-09-18 00:21:35.634099 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:21:35.634110 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:21:35.634121 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:21:35.634131 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:21:35.634151 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:21:36.754306 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:21:36.754406 | orchestrator | 2025-09-18 00:21:36.754423 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-18 00:21:36.754467 | orchestrator | Thursday 18 September 2025 00:21:35 +0000 (0:00:00.151) 0:00:12.160 **** 2025-09-18 00:21:36.754479 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:21:36.754490 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:21:36.754501 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:21:36.754512 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:21:36.754523 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:21:36.754534 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:21:36.754545 | orchestrator | 2025-09-18 00:21:36.754556 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-18 00:21:36.754567 | orchestrator | Thursday 18 September 2025 00:21:36 +0000 (0:00:00.671) 0:00:12.832 **** 2025-09-18 00:21:36.754578 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:21:36.754589 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:21:36.754599 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:21:36.754635 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:21:36.754646 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:21:36.754657 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:21:36.754667 | orchestrator | 2025-09-18 00:21:36.754678 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:21:36.754690 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:21:36.754702 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2025-09-18 00:21:36.754713 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:21:36.754724 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:21:36.754735 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:21:36.754745 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:21:36.754756 | orchestrator | 2025-09-18 00:21:36.754767 | orchestrator | 2025-09-18 00:21:36.754778 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:21:36.754788 | orchestrator | Thursday 18 September 2025 00:21:36 +0000 (0:00:00.223) 0:00:13.055 **** 2025-09-18 00:21:36.754799 | orchestrator | =============================================================================== 2025-09-18 00:21:36.754810 | orchestrator | Gathering Facts --------------------------------------------------------- 3.43s 2025-09-18 00:21:36.754821 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.36s 2025-09-18 00:21:36.754833 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2025-09-18 00:21:36.754843 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2025-09-18 00:21:36.754855 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2025-09-18 00:21:36.754867 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2025-09-18 00:21:36.754880 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s 2025-09-18 00:21:36.754892 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.70s 2025-09-18 00:21:36.754904 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2025-09-18 00:21:36.754916 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s 2025-09-18 00:21:36.754929 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.25s 2025-09-18 00:21:36.754942 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-09-18 00:21:36.754954 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2025-09-18 00:21:36.754966 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2025-09-18 00:21:36.754994 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-09-18 00:21:36.755007 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-09-18 00:21:36.755019 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2025-09-18 00:21:36.755031 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.11s 2025-09-18 00:21:37.019839 | orchestrator | + osism apply --environment custom facts 2025-09-18 00:21:39.036370 | orchestrator | 2025-09-18 00:21:39 | INFO  | Trying to run play facts in environment custom 2025-09-18 00:21:49.119210 | orchestrator | 2025-09-18 00:21:49 | INFO  | Task 
6ac7c4db-7922-4fb3-878a-9e189c4efe7a (facts) was prepared for execution. 2025-09-18 00:21:49.119327 | orchestrator | 2025-09-18 00:21:49 | INFO  | It takes a moment until task 6ac7c4db-7922-4fb3-878a-9e189c4efe7a (facts) has been started and output is visible here. 2025-09-18 00:22:34.236765 | orchestrator | 2025-09-18 00:22:34.236868 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-18 00:22:34.236880 | orchestrator | 2025-09-18 00:22:34.236889 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-18 00:22:34.236898 | orchestrator | Thursday 18 September 2025 00:21:52 +0000 (0:00:00.063) 0:00:00.063 **** 2025-09-18 00:22:34.236907 | orchestrator | ok: [testbed-manager] 2025-09-18 00:22:34.236916 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:22:34.236925 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:22:34.236934 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:22:34.236942 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:22:34.236950 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:22:34.236958 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:22:34.236966 | orchestrator | 2025-09-18 00:22:34.236975 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-18 00:22:34.236983 | orchestrator | Thursday 18 September 2025 00:21:53 +0000 (0:00:01.271) 0:00:01.335 **** 2025-09-18 00:22:34.236991 | orchestrator | ok: [testbed-manager] 2025-09-18 00:22:34.236999 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:22:34.237008 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:22:34.237016 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:22:34.237024 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:22:34.237032 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:22:34.237040 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:22:34.237048 | orchestrator | 2025-09-18 00:22:34.237056 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-18 00:22:34.237065 | orchestrator | 2025-09-18 00:22:34.237073 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-18 00:22:34.237081 | orchestrator | Thursday 18 September 2025 00:21:54 +0000 (0:00:01.130) 0:00:02.466 **** 2025-09-18 00:22:34.237090 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237098 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237106 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237114 | orchestrator | 2025-09-18 00:22:34.237122 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-18 00:22:34.237131 | orchestrator | Thursday 18 September 2025 00:21:55 +0000 (0:00:00.101) 0:00:02.567 **** 2025-09-18 00:22:34.237139 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237148 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237156 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237164 | orchestrator | 2025-09-18 00:22:34.237172 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-18 00:22:34.237180 | orchestrator | Thursday 18 September 2025 00:21:55 +0000 (0:00:00.185) 0:00:02.753 **** 2025-09-18 00:22:34.237189 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237197 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237205 
| orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237214 | orchestrator | 2025-09-18 00:22:34.237222 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-18 00:22:34.237230 | orchestrator | Thursday 18 September 2025 00:21:55 +0000 (0:00:00.171) 0:00:02.925 **** 2025-09-18 00:22:34.237239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:22:34.237249 | orchestrator | 2025-09-18 00:22:34.237257 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-18 00:22:34.237265 | orchestrator | Thursday 18 September 2025 00:21:55 +0000 (0:00:00.108) 0:00:03.033 **** 2025-09-18 00:22:34.237292 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237301 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237309 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237317 | orchestrator | 2025-09-18 00:22:34.237328 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-18 00:22:34.237337 | orchestrator | Thursday 18 September 2025 00:21:55 +0000 (0:00:00.397) 0:00:03.430 **** 2025-09-18 00:22:34.237347 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:22:34.237356 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:22:34.237365 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:22:34.237374 | orchestrator | 2025-09-18 00:22:34.237383 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-18 00:22:34.237419 | orchestrator | Thursday 18 September 2025 00:21:56 +0000 (0:00:00.110) 0:00:03.541 **** 2025-09-18 00:22:34.237428 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:22:34.237437 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:22:34.237446 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:22:34.237456 | orchestrator | 2025-09-18 00:22:34.237465 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-18 00:22:34.237474 | orchestrator | Thursday 18 September 2025 00:21:57 +0000 (0:00:00.943) 0:00:04.485 **** 2025-09-18 00:22:34.237484 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237493 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237503 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237512 | orchestrator | 2025-09-18 00:22:34.237521 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-18 00:22:34.237532 | orchestrator | Thursday 18 September 2025 00:21:57 +0000 (0:00:00.421) 0:00:04.906 **** 2025-09-18 00:22:34.237540 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:22:34.237549 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:22:34.237557 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:22:34.237565 | orchestrator | 2025-09-18 00:22:34.237573 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-18 00:22:34.237581 | orchestrator | Thursday 18 September 2025 00:21:58 +0000 (0:00:01.061) 0:00:05.967 **** 2025-09-18 00:22:34.237626 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:22:34.237635 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:22:34.237644 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:22:34.237651 | orchestrator | 2025-09-18 00:22:34.237660 | 
orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-18 00:22:34.237668 | orchestrator | Thursday 18 September 2025 00:22:16 +0000 (0:00:17.870) 0:00:23.838 **** 2025-09-18 00:22:34.237676 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:22:34.237684 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:22:34.237692 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:22:34.237700 | orchestrator | 2025-09-18 00:22:34.237708 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-18 00:22:34.237730 | orchestrator | Thursday 18 September 2025 00:22:16 +0000 (0:00:00.097) 0:00:23.936 **** 2025-09-18 00:22:34.237739 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:22:34.237747 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:22:34.237755 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:22:34.237763 | orchestrator | 2025-09-18 00:22:34.237772 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-18 00:22:34.237780 | orchestrator | Thursday 18 September 2025 00:22:24 +0000 (0:00:08.234) 0:00:32.170 **** 2025-09-18 00:22:34.237788 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237796 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237804 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237812 | orchestrator | 2025-09-18 00:22:34.237820 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-18 00:22:34.237828 | orchestrator | Thursday 18 September 2025 00:22:25 +0000 (0:00:00.445) 0:00:32.616 **** 2025-09-18 00:22:34.237836 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-18 00:22:34.237844 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-18 00:22:34.237857 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-18 00:22:34.237866 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-18 00:22:34.237874 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-18 00:22:34.237882 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-18 00:22:34.237890 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-18 00:22:34.237898 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-18 00:22:34.237906 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-18 00:22:34.237914 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-18 00:22:34.237922 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-18 00:22:34.237930 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-18 00:22:34.237938 | orchestrator | 2025-09-18 00:22:34.237946 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-18 00:22:34.237954 | orchestrator | Thursday 18 September 2025 00:22:28 +0000 (0:00:03.743) 0:00:36.360 **** 2025-09-18 00:22:34.237962 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.237970 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.237982 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.237994 | orchestrator | 2025-09-18 00:22:34.238006 | orchestrator | PLAY [Gather facts for all hosts] 
********************************************** 2025-09-18 00:22:34.238066 | orchestrator | 2025-09-18 00:22:34.238076 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-18 00:22:34.238084 | orchestrator | Thursday 18 September 2025 00:22:30 +0000 (0:00:01.439) 0:00:37.799 **** 2025-09-18 00:22:34.238092 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:22:34.238100 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:22:34.238108 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:22:34.238115 | orchestrator | ok: [testbed-manager] 2025-09-18 00:22:34.238123 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:22:34.238131 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:22:34.238138 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:22:34.238146 | orchestrator | 2025-09-18 00:22:34.238154 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:22:34.238162 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:22:34.238171 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:22:34.238181 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:22:34.238188 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:22:34.238197 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:22:34.238205 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:22:34.238217 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:22:34.238225 | orchestrator | 2025-09-18 00:22:34.238233 | orchestrator | 2025-09-18 00:22:34.238241 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:22:34.238249 | orchestrator | Thursday 18 September 2025 00:22:34 +0000 (0:00:03.882) 0:00:41.682 **** 2025-09-18 00:22:34.238257 | orchestrator | =============================================================================== 2025-09-18 00:22:34.238271 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.87s 2025-09-18 00:22:34.238279 | orchestrator | Install required packages (Debian) -------------------------------------- 8.23s 2025-09-18 00:22:34.238287 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s 2025-09-18 00:22:34.238294 | orchestrator | Copy fact files --------------------------------------------------------- 3.74s 2025-09-18 00:22:34.238302 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.44s 2025-09-18 00:22:34.238310 | orchestrator | Create custom facts directory ------------------------------------------- 1.27s 2025-09-18 00:22:34.238324 | orchestrator | Copy fact file ---------------------------------------------------------- 1.13s 2025-09-18 00:22:34.442455 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2025-09-18 00:22:34.442552 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.94s 2025-09-18 00:22:34.442566 | orchestrator | Create custom facts directory 
------------------------------------------- 0.45s 2025-09-18 00:22:34.442577 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.42s 2025-09-18 00:22:34.442588 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s 2025-09-18 00:22:34.442600 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2025-09-18 00:22:34.442610 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s 2025-09-18 00:22:34.442621 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-09-18 00:22:34.442632 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2025-09-18 00:22:34.442644 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2025-09-18 00:22:34.442654 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-09-18 00:22:34.731762 | orchestrator | + osism apply bootstrap 2025-09-18 00:22:46.722678 | orchestrator | 2025-09-18 00:22:46 | INFO  | Task a7455cb0-054b-4f1c-8c49-003a2450b5bf (bootstrap) was prepared for execution. 2025-09-18 00:22:46.722794 | orchestrator | 2025-09-18 00:22:46 | INFO  | It takes a moment until task a7455cb0-054b-4f1c-8c49-003a2450b5bf (bootstrap) has been started and output is visible here. 2025-09-18 00:23:01.685486 | orchestrator | 2025-09-18 00:23:01.685601 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-18 00:23:01.685617 | orchestrator | 2025-09-18 00:23:01.685628 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-18 00:23:01.685640 | orchestrator | Thursday 18 September 2025 00:22:50 +0000 (0:00:00.119) 0:00:00.119 **** 2025-09-18 00:23:01.685651 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:01.685663 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:01.685674 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:01.685684 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:01.685695 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:01.685706 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:01.685716 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:01.685727 | orchestrator | 2025-09-18 00:23:01.685738 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-18 00:23:01.685749 | orchestrator | 2025-09-18 00:23:01.685760 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-18 00:23:01.685771 | orchestrator | Thursday 18 September 2025 00:22:50 +0000 (0:00:00.180) 0:00:00.300 **** 2025-09-18 00:23:01.685782 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:01.685792 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:01.685803 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:01.685814 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:01.685825 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:01.685835 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:01.685846 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:01.685880 | orchestrator | 2025-09-18 00:23:01.685892 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-18 00:23:01.685902 | orchestrator | 2025-09-18 00:23:01.685913 | orchestrator | TASK 
[Gathers facts about hosts] *********************************************** 2025-09-18 00:23:01.685924 | orchestrator | Thursday 18 September 2025 00:22:54 +0000 (0:00:03.633) 0:00:03.934 **** 2025-09-18 00:23:01.685935 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-18 00:23:01.685946 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-18 00:23:01.685956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-18 00:23:01.685967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-18 00:23:01.685981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:23:01.685994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-18 00:23:01.686006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:23:01.686077 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-18 00:23:01.686092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:23:01.686105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-18 00:23:01.686118 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-18 00:23:01.686131 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-18 00:23:01.686143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 00:23:01.686172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 00:23:01.686195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-18 00:23:01.686209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-18 00:23:01.686221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-18 00:23:01.686233 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-18 00:23:01.686246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-18 00:23:01.686258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 00:23:01.686270 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:01.686283 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-18 00:23:01.686295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-18 00:23:01.686308 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-18 00:23:01.686320 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:01.686331 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-18 00:23:01.686342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-18 00:23:01.686352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-18 00:23:01.686363 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-18 00:23:01.686374 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-18 00:23:01.686407 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-18 00:23:01.686418 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-18 00:23:01.686429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-18 00:23:01.686439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-18 00:23:01.686450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  
2025-09-18 00:23:01.686479 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-18 00:23:01.686490 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-18 00:23:01.686500 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:01.686511 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-18 00:23:01.686522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 00:23:01.686532 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-18 00:23:01.686555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-18 00:23:01.686566 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-18 00:23:01.686578 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:01.686589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 00:23:01.686600 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:01.686610 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-18 00:23:01.686640 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-18 00:23:01.686652 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-18 00:23:01.686662 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-18 00:23:01.686673 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-18 00:23:01.686684 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-18 00:23:01.686694 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-18 00:23:01.686705 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:01.686716 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-18 00:23:01.686726 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:01.686737 | orchestrator | 2025-09-18 00:23:01.686748 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-18 00:23:01.686759 | orchestrator | 2025-09-18 00:23:01.686770 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-18 00:23:01.686781 | orchestrator | Thursday 18 September 2025 00:22:54 +0000 (0:00:00.378) 0:00:04.312 **** 2025-09-18 00:23:01.686792 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:01.686802 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:01.686813 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:01.686824 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:01.686835 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:01.686845 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:01.686856 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:01.686867 | orchestrator | 2025-09-18 00:23:01.686877 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-18 00:23:01.686888 | orchestrator | Thursday 18 September 2025 00:22:55 +0000 (0:00:01.129) 0:00:05.441 **** 2025-09-18 00:23:01.686899 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:01.686910 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:01.686920 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:01.686931 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:01.686942 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:01.686952 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:01.686963 | orchestrator | ok: 
[testbed-node-1] 2025-09-18 00:23:01.686974 | orchestrator | 2025-09-18 00:23:01.686984 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-18 00:23:01.686995 | orchestrator | Thursday 18 September 2025 00:22:57 +0000 (0:00:01.264) 0:00:06.706 **** 2025-09-18 00:23:01.687007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:01.687021 | orchestrator | 2025-09-18 00:23:01.687032 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-18 00:23:01.687042 | orchestrator | Thursday 18 September 2025 00:22:57 +0000 (0:00:00.262) 0:00:06.968 **** 2025-09-18 00:23:01.687053 | orchestrator | changed: [testbed-manager] 2025-09-18 00:23:01.687064 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:01.687080 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:01.687091 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:01.687102 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:01.687113 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:01.687124 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:01.687134 | orchestrator | 2025-09-18 00:23:01.687152 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-18 00:23:01.687163 | orchestrator | Thursday 18 September 2025 00:22:59 +0000 (0:00:01.971) 0:00:08.939 **** 2025-09-18 00:23:01.687174 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:01.687186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:01.687199 | orchestrator | 2025-09-18 00:23:01.687210 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-18 00:23:01.687220 | orchestrator | Thursday 18 September 2025 00:22:59 +0000 (0:00:00.251) 0:00:09.191 **** 2025-09-18 00:23:01.687231 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:01.687242 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:01.687253 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:01.687263 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:01.687274 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:01.687284 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:01.687295 | orchestrator | 2025-09-18 00:23:01.687306 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-18 00:23:01.687317 | orchestrator | Thursday 18 September 2025 00:23:00 +0000 (0:00:01.074) 0:00:10.265 **** 2025-09-18 00:23:01.687328 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:01.687338 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:01.687349 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:01.687360 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:01.687370 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:01.687400 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:01.687411 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:01.687422 | orchestrator | 2025-09-18 00:23:01.687433 | orchestrator | TASK 
[osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-18 00:23:01.687444 | orchestrator | Thursday 18 September 2025 00:23:01 +0000 (0:00:00.565) 0:00:10.830 **** 2025-09-18 00:23:01.687455 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:01.687465 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:01.687476 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:01.687486 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:01.687497 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:01.687508 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:01.687519 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:01.687529 | orchestrator | 2025-09-18 00:23:01.687540 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-18 00:23:01.687552 | orchestrator | Thursday 18 September 2025 00:23:01 +0000 (0:00:00.406) 0:00:11.237 **** 2025-09-18 00:23:01.687563 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:01.687574 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:01.687591 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:13.351834 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:13.352004 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:13.352020 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:13.352032 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:13.352045 | orchestrator | 2025-09-18 00:23:13.352058 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-18 00:23:13.352071 | orchestrator | Thursday 18 September 2025 00:23:01 +0000 (0:00:00.191) 0:00:11.428 **** 2025-09-18 00:23:13.352085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:13.352118 | orchestrator | 2025-09-18 00:23:13.352130 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-18 00:23:13.352143 | orchestrator | Thursday 18 September 2025 00:23:02 +0000 (0:00:00.272) 0:00:11.701 **** 2025-09-18 00:23:13.352197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:13.352217 | orchestrator | 2025-09-18 00:23:13.352235 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-18 00:23:13.352252 | orchestrator | Thursday 18 September 2025 00:23:02 +0000 (0:00:00.290) 0:00:11.991 **** 2025-09-18 00:23:13.352270 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.352291 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.352308 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.352328 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.352343 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.352355 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.352368 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.352415 | orchestrator | 2025-09-18 00:23:13.352428 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-18 
00:23:13.352441 | orchestrator | Thursday 18 September 2025 00:23:03 +0000 (0:00:01.409) 0:00:13.400 **** 2025-09-18 00:23:13.352452 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:13.352463 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:13.352473 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:13.352484 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:13.352495 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:13.352506 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:13.352516 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:13.352527 | orchestrator | 2025-09-18 00:23:13.352538 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-18 00:23:13.352549 | orchestrator | Thursday 18 September 2025 00:23:03 +0000 (0:00:00.222) 0:00:13.623 **** 2025-09-18 00:23:13.352559 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.352570 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.352581 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.352592 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.352602 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.352613 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.352624 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.352634 | orchestrator | 2025-09-18 00:23:13.352645 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-18 00:23:13.352656 | orchestrator | Thursday 18 September 2025 00:23:04 +0000 (0:00:00.555) 0:00:14.178 **** 2025-09-18 00:23:13.352667 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:13.352677 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:13.352688 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:13.352699 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:13.352710 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:13.352720 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:13.352731 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:13.352741 | orchestrator | 2025-09-18 00:23:13.352752 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-18 00:23:13.352765 | orchestrator | Thursday 18 September 2025 00:23:04 +0000 (0:00:00.252) 0:00:14.431 **** 2025-09-18 00:23:13.352775 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:13.352786 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.352796 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:13.352812 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:13.352832 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:13.352851 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:13.352871 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:13.352889 | orchestrator | 2025-09-18 00:23:13.352900 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-18 00:23:13.352911 | orchestrator | Thursday 18 September 2025 00:23:05 +0000 (0:00:00.522) 0:00:14.954 **** 2025-09-18 00:23:13.352922 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.352942 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:13.352953 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:13.352964 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:13.352974 | orchestrator | changed: [testbed-node-5] 
2025-09-18 00:23:13.352985 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:13.352995 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:13.353005 | orchestrator | 2025-09-18 00:23:13.353016 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-18 00:23:13.353027 | orchestrator | Thursday 18 September 2025 00:23:06 +0000 (0:00:01.035) 0:00:15.989 **** 2025-09-18 00:23:13.353037 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.353048 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.353058 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.353069 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.353080 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.353091 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.353102 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.353112 | orchestrator | 2025-09-18 00:23:13.353123 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-18 00:23:13.353134 | orchestrator | Thursday 18 September 2025 00:23:07 +0000 (0:00:01.079) 0:00:17.068 **** 2025-09-18 00:23:13.353168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:13.353180 | orchestrator | 2025-09-18 00:23:13.353191 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-18 00:23:13.353202 | orchestrator | Thursday 18 September 2025 00:23:07 +0000 (0:00:00.376) 0:00:17.445 **** 2025-09-18 00:23:13.353212 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:13.353223 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:13.353234 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:13.353245 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:13.353255 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:13.353266 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:13.353277 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:13.353287 | orchestrator | 2025-09-18 00:23:13.353298 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-18 00:23:13.353309 | orchestrator | Thursday 18 September 2025 00:23:08 +0000 (0:00:01.131) 0:00:18.577 **** 2025-09-18 00:23:13.353320 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.353330 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.353341 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.353351 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.353362 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.353399 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.353412 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.353422 | orchestrator | 2025-09-18 00:23:13.353433 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-18 00:23:13.353444 | orchestrator | Thursday 18 September 2025 00:23:09 +0000 (0:00:00.244) 0:00:18.822 **** 2025-09-18 00:23:13.353454 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.353465 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.353475 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.353486 | orchestrator | ok: [testbed-node-5] 2025-09-18 
00:23:13.353497 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.353507 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.353517 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.353528 | orchestrator | 2025-09-18 00:23:13.353538 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-18 00:23:13.353549 | orchestrator | Thursday 18 September 2025 00:23:09 +0000 (0:00:00.227) 0:00:19.049 **** 2025-09-18 00:23:13.353612 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.353624 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.353643 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.353654 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.353664 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.353675 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.353685 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.353696 | orchestrator | 2025-09-18 00:23:13.353707 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-18 00:23:13.353717 | orchestrator | Thursday 18 September 2025 00:23:09 +0000 (0:00:00.204) 0:00:19.254 **** 2025-09-18 00:23:13.353735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:13.353748 | orchestrator | 2025-09-18 00:23:13.353759 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-18 00:23:13.353770 | orchestrator | Thursday 18 September 2025 00:23:09 +0000 (0:00:00.270) 0:00:19.524 **** 2025-09-18 00:23:13.353780 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.353791 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.353801 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.353812 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.353822 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.353833 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.353843 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.353861 | orchestrator | 2025-09-18 00:23:13.353880 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-18 00:23:13.353899 | orchestrator | Thursday 18 September 2025 00:23:10 +0000 (0:00:00.569) 0:00:20.094 **** 2025-09-18 00:23:13.353919 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:13.353939 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:13.353956 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:13.353971 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:13.353982 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:13.353993 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:13.354003 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:13.354014 | orchestrator | 2025-09-18 00:23:13.354096 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-18 00:23:13.354107 | orchestrator | Thursday 18 September 2025 00:23:10 +0000 (0:00:00.250) 0:00:20.344 **** 2025-09-18 00:23:13.354117 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.354128 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.354139 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.354149 | orchestrator | ok: 
[testbed-node-5] 2025-09-18 00:23:13.354160 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:13.354170 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:13.354181 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:13.354191 | orchestrator | 2025-09-18 00:23:13.354202 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-18 00:23:13.354212 | orchestrator | Thursday 18 September 2025 00:23:11 +0000 (0:00:00.985) 0:00:21.330 **** 2025-09-18 00:23:13.354223 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.354234 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.354244 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.354255 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:13.354265 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:13.354276 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.354286 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:13.354297 | orchestrator | 2025-09-18 00:23:13.354307 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-18 00:23:13.354318 | orchestrator | Thursday 18 September 2025 00:23:12 +0000 (0:00:00.600) 0:00:21.930 **** 2025-09-18 00:23:13.354329 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:13.354339 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:13.354350 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:13.354360 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:13.354439 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:53.770457 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:53.770611 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:53.770627 | orchestrator | 2025-09-18 00:23:53.770640 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-18 00:23:53.770653 | orchestrator | Thursday 18 September 2025 00:23:13 +0000 (0:00:01.084) 0:00:23.015 **** 2025-09-18 00:23:53.770664 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.770676 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.770687 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.770698 | orchestrator | changed: [testbed-manager] 2025-09-18 00:23:53.770709 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:53.770720 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:53.770731 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:53.770742 | orchestrator | 2025-09-18 00:23:53.770753 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-18 00:23:53.770764 | orchestrator | Thursday 18 September 2025 00:23:30 +0000 (0:00:17.616) 0:00:40.631 **** 2025-09-18 00:23:53.770775 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.770786 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.770796 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.770807 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.770818 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.770828 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.770839 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.770850 | orchestrator | 2025-09-18 00:23:53.770861 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-18 00:23:53.770872 | orchestrator | Thursday 18 September 2025 00:23:31 +0000 (0:00:00.211) 0:00:40.843 **** 2025-09-18 00:23:53.770882 | 
orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.770893 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.770903 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.770914 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.770925 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.770935 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.770946 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.770957 | orchestrator | 2025-09-18 00:23:53.770968 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-18 00:23:53.770981 | orchestrator | Thursday 18 September 2025 00:23:31 +0000 (0:00:00.204) 0:00:41.047 **** 2025-09-18 00:23:53.770993 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.771005 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.771017 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.771030 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.771042 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.771054 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.771066 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.771078 | orchestrator | 2025-09-18 00:23:53.771091 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-18 00:23:53.771119 | orchestrator | Thursday 18 September 2025 00:23:31 +0000 (0:00:00.219) 0:00:41.267 **** 2025-09-18 00:23:53.771161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:53.771178 | orchestrator | 2025-09-18 00:23:53.771190 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-18 00:23:53.771204 | orchestrator | Thursday 18 September 2025 00:23:31 +0000 (0:00:00.292) 0:00:41.559 **** 2025-09-18 00:23:53.771216 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.771228 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.771240 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.771252 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.771265 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.771278 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.771290 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.771326 | orchestrator | 2025-09-18 00:23:53.771337 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-18 00:23:53.771348 | orchestrator | Thursday 18 September 2025 00:23:33 +0000 (0:00:01.849) 0:00:43.409 **** 2025-09-18 00:23:53.771359 | orchestrator | changed: [testbed-manager] 2025-09-18 00:23:53.771370 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:53.771380 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:53.771391 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:53.771401 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:53.771412 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:53.771423 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:53.771433 | orchestrator | 2025-09-18 00:23:53.771444 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-18 00:23:53.771455 | orchestrator | Thursday 18 September 2025 00:23:34 +0000 (0:00:01.125) 0:00:44.535 **** 2025-09-18 
00:23:53.771482 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.771494 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.771505 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.771516 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.771526 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.771537 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.771548 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.771559 | orchestrator | 2025-09-18 00:23:53.771569 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-18 00:23:53.771580 | orchestrator | Thursday 18 September 2025 00:23:35 +0000 (0:00:00.797) 0:00:45.332 **** 2025-09-18 00:23:53.771592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:53.771605 | orchestrator | 2025-09-18 00:23:53.771616 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-18 00:23:53.771627 | orchestrator | Thursday 18 September 2025 00:23:35 +0000 (0:00:00.277) 0:00:45.610 **** 2025-09-18 00:23:53.771638 | orchestrator | changed: [testbed-manager] 2025-09-18 00:23:53.771649 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:53.771660 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:53.771671 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:53.771681 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:53.771692 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:53.771703 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:53.771713 | orchestrator | 2025-09-18 00:23:53.771741 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-18 00:23:53.771753 | orchestrator | Thursday 18 September 2025 00:23:36 +0000 (0:00:01.042) 0:00:46.652 **** 2025-09-18 00:23:53.771764 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:23:53.771775 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:23:53.771785 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:23:53.771796 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:23:53.771807 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:23:53.771817 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:23:53.771828 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:23:53.771839 | orchestrator | 2025-09-18 00:23:53.771850 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-18 00:23:53.771860 | orchestrator | Thursday 18 September 2025 00:23:37 +0000 (0:00:00.265) 0:00:46.918 **** 2025-09-18 00:23:53.771871 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:53.771882 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:53.771892 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:53.771903 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:53.771913 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:53.771924 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:53.771935 | orchestrator | changed: [testbed-manager] 2025-09-18 00:23:53.771953 | orchestrator | 2025-09-18 00:23:53.771964 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-18 00:23:53.771975 | orchestrator | 
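The change reported by "Forward syslog message to local fluentd daemon" amounts to a single rsyslog forwarding rule plus a service restart. A minimal sketch under assumed values (the drop-in file name, port 5140 and UDP transport are placeholders, not taken from the role):

- name: Forward syslog messages to the local fluentd daemon
  ansible.builtin.copy:
    dest: /etc/rsyslog.d/10-fluentd.conf  # file name is an assumption
    mode: "0644"
    content: |
      # Target, port and protocol are placeholders for the local fluentd syslog input.
      *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")

- name: Restart rsyslog to apply the forwarding rule
  ansible.builtin.service:
    name: rsyslog
    state: restarted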
Thursday 18 September 2025 00:23:48 +0000 (0:00:11.061) 0:00:57.979 **** 2025-09-18 00:23:53.771986 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.771996 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.772007 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.772018 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.772028 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.772039 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.772049 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.772060 | orchestrator | 2025-09-18 00:23:53.772071 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-18 00:23:53.772082 | orchestrator | Thursday 18 September 2025 00:23:49 +0000 (0:00:01.337) 0:00:59.317 **** 2025-09-18 00:23:53.772092 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.772103 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.772114 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.772124 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.772135 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.772145 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.772156 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.772166 | orchestrator | 2025-09-18 00:23:53.772177 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-18 00:23:53.772188 | orchestrator | Thursday 18 September 2025 00:23:50 +0000 (0:00:00.843) 0:01:00.160 **** 2025-09-18 00:23:53.772198 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.772209 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.772220 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.772230 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.772241 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.772251 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.772262 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.772273 | orchestrator | 2025-09-18 00:23:53.772284 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-18 00:23:53.772294 | orchestrator | Thursday 18 September 2025 00:23:50 +0000 (0:00:00.220) 0:01:00.381 **** 2025-09-18 00:23:53.772305 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.772316 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.772326 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.772337 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.772348 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.772358 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.772369 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.772379 | orchestrator | 2025-09-18 00:23:53.772390 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-18 00:23:53.772401 | orchestrator | Thursday 18 September 2025 00:23:50 +0000 (0:00:00.208) 0:01:00.589 **** 2025-09-18 00:23:53.772412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:23:53.772423 | orchestrator | 2025-09-18 00:23:53.772434 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-18 00:23:53.772444 | orchestrator | 
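The systohc and configfs steps above are small: install util-linux-extra (which provides hwclock on Ubuntu 24.04), write the system time to the hardware clock, and make sure the sys-kernel-config mount unit (configfs) is active. A sketch, assuming these module calls rather than the actual role tasks:

- name: Install util-linux-extra package
  ansible.builtin.apt:
    name: util-linux-extra
    state: present

- name: Sync hardware clock from system time
  ansible.builtin.command: hwclock --systohc
  changed_when: false  # the run above reports "ok"; the real role may detect changes differently

- name: Start sys-kernel-config mount
  ansible.builtin.systemd:
    name: sys-kernel-config.mount
    state: started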
Thursday 18 September 2025 00:23:51 +0000 (0:00:00.276) 0:01:00.865 **** 2025-09-18 00:23:53.772455 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.772500 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.772513 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.772524 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.772535 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.772546 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.772556 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.772567 | orchestrator | 2025-09-18 00:23:53.772578 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-18 00:23:53.772588 | orchestrator | Thursday 18 September 2025 00:23:52 +0000 (0:00:01.723) 0:01:02.589 **** 2025-09-18 00:23:53.772607 | orchestrator | changed: [testbed-manager] 2025-09-18 00:23:53.772618 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:23:53.772628 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:23:53.772639 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:23:53.772650 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:23:53.772660 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:23:53.772671 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:23:53.772682 | orchestrator | 2025-09-18 00:23:53.772692 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-18 00:23:53.772703 | orchestrator | Thursday 18 September 2025 00:23:53 +0000 (0:00:00.620) 0:01:03.210 **** 2025-09-18 00:23:53.772714 | orchestrator | ok: [testbed-manager] 2025-09-18 00:23:53.772725 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:23:53.772735 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:23:53.772746 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:23:53.772757 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:23:53.772767 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:23:53.772778 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:23:53.772788 | orchestrator | 2025-09-18 00:23:53.772806 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-18 00:26:13.549320 | orchestrator | Thursday 18 September 2025 00:23:53 +0000 (0:00:00.222) 0:01:03.432 **** 2025-09-18 00:26:13.549472 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:13.549489 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:13.549500 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:13.549512 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:13.549523 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:13.549534 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:13.549545 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:13.549619 | orchestrator | 2025-09-18 00:26:13.549633 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-18 00:26:13.549645 | orchestrator | Thursday 18 September 2025 00:23:55 +0000 (0:00:01.298) 0:01:04.731 **** 2025-09-18 00:26:13.549656 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:13.549669 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:13.549681 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:13.549692 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:13.549703 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:13.549714 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:26:13.549725 | orchestrator | 
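"Set needrestart mode" reports changed on every host; for unattended runs this usually means switching needrestart to automatic restarts so the later package upgrades cannot block on interactive prompts. A sketch with an assumed conf.d path and a placeholder cache lifetime:

- name: Set needrestart to automatic mode
  ansible.builtin.lineinfile:
    path: /etc/needrestart/conf.d/osism.conf  # path is an assumption
    line: "$nrconf{restart} = 'a';"
    create: true
    mode: "0644"

- name: Update package cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600  # placeholder for apt_cache_valid_time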
changed: [testbed-node-2] 2025-09-18 00:26:13.549736 | orchestrator | 2025-09-18 00:26:13.549748 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-18 00:26:13.549760 | orchestrator | Thursday 18 September 2025 00:23:57 +0000 (0:00:01.965) 0:01:06.697 **** 2025-09-18 00:26:13.549771 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:13.549782 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:13.549793 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:13.549829 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:13.549842 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:13.549855 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:13.549868 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:13.549881 | orchestrator | 2025-09-18 00:26:13.549894 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-18 00:26:13.549907 | orchestrator | Thursday 18 September 2025 00:23:59 +0000 (0:00:02.639) 0:01:09.337 **** 2025-09-18 00:26:13.549919 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:13.549931 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:13.549943 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:13.549957 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:13.549969 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:13.549982 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:13.549994 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:13.550007 | orchestrator | 2025-09-18 00:26:13.550094 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-18 00:26:13.550136 | orchestrator | Thursday 18 September 2025 00:24:37 +0000 (0:00:37.976) 0:01:47.313 **** 2025-09-18 00:26:13.550150 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:13.550163 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:13.550175 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:26:13.550186 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:13.550197 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:26:13.550207 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:13.550218 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:13.550229 | orchestrator | 2025-09-18 00:26:13.550246 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-18 00:26:13.550258 | orchestrator | Thursday 18 September 2025 00:25:56 +0000 (0:01:18.924) 0:03:06.238 **** 2025-09-18 00:26:13.550269 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:13.550281 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:13.550291 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:13.550302 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:13.550313 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:13.550323 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:13.550334 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:13.550345 | orchestrator | 2025-09-18 00:26:13.550356 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-18 00:26:13.550368 | orchestrator | Thursday 18 September 2025 00:25:58 +0000 (0:00:01.964) 0:03:08.203 **** 2025-09-18 00:26:13.550379 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:13.550390 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:13.550400 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:13.550411 | orchestrator 
| ok: [testbed-node-1] 2025-09-18 00:26:13.550421 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:13.550432 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:13.550443 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:13.550453 | orchestrator | 2025-09-18 00:26:13.550464 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-18 00:26:13.550475 | orchestrator | Thursday 18 September 2025 00:26:11 +0000 (0:00:12.724) 0:03:20.928 **** 2025-09-18 00:26:13.550497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-18 00:26:13.550521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-18 00:26:13.550584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-18 00:26:13.550600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-18 00:26:13.550621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-18 00:26:13.550633 | orchestrator | 2025-09-18 00:26:13.550644 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-18 00:26:13.550655 | orchestrator | Thursday 18 September 2025 00:26:11 +0000 (0:00:00.383) 0:03:21.311 **** 2025-09-18 00:26:13.550666 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-18 00:26:13.550677 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:26:13.550688 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-18 00:26:13.550699 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-18 00:26:13.550710 | orchestrator | 
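The sysctl include above loops over a group-keyed dictionary whose items are printed verbatim in the loop output. Written out as a variable (the name sysctl_defaults is an assumption; the keys and values are taken directly from the log), it looks like this:

sysctl_defaults:
  elasticsearch:
    - { name: vm.max_map_count, value: 262144 }
  rabbitmq:
    - { name: net.ipv4.tcp_keepalive_time, value: 6 }
    - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
    - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
    - { name: net.core.wmem_max, value: 16777216 }
    - { name: net.core.rmem_max, value: 16777216 }
    - { name: net.ipv4.tcp_fin_timeout, value: 20 }
    - { name: net.ipv4.tcp_tw_reuse, value: 1 }
    - { name: net.core.somaxconn, value: 4096 }
    - { name: net.ipv4.tcp_syncookies, value: 0 }
    - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
  generic:
    - { name: vm.swappiness, value: 1 }
  compute:
    - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
  k3s_node:
    - { name: fs.inotify.max_user_instances, value: 1024 }

Each entry is applied per host group by the tasks that follow; a sketch of that apply step appears further down.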
skipping: [testbed-node-3] 2025-09-18 00:26:13.550721 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:26:13.550731 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-18 00:26:13.550742 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:26:13.550753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-18 00:26:13.550764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-18 00:26:13.550774 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-18 00:26:13.550785 | orchestrator | 2025-09-18 00:26:13.550796 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-18 00:26:13.550812 | orchestrator | Thursday 18 September 2025 00:26:13 +0000 (0:00:01.739) 0:03:23.050 **** 2025-09-18 00:26:13.550823 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-18 00:26:13.550836 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-18 00:26:13.550846 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-18 00:26:13.550857 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-18 00:26:13.550868 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-18 00:26:13.550879 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-18 00:26:13.550889 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-18 00:26:13.550900 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-18 00:26:13.550911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-18 00:26:13.550921 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-18 00:26:13.550932 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-18 00:26:13.550943 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-18 00:26:13.550954 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-18 00:26:13.550964 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-18 00:26:13.550975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-18 00:26:13.550986 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-18 00:26:13.550997 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-18 00:26:13.551014 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-18 00:26:13.551025 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-18 00:26:13.551036 | orchestrator | skipping: 
[testbed-manager] 2025-09-18 00:26:13.551047 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-18 00:26:13.551064 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-18 00:26:22.118350 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-18 00:26:22.118483 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-18 00:26:22.118502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-18 00:26:22.118514 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-18 00:26:22.118525 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-18 00:26:22.118536 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-18 00:26:22.118548 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-18 00:26:22.118610 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-18 00:26:22.118623 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-18 00:26:22.118634 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-18 00:26:22.118646 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-18 00:26:22.118658 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:26:22.118670 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-18 00:26:22.118681 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-18 00:26:22.118692 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-18 00:26:22.118703 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-18 00:26:22.118714 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-18 00:26:22.118725 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-18 00:26:22.118736 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-18 00:26:22.118746 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-18 00:26:22.118759 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:26:22.118770 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:26:22.118781 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-18 00:26:22.118792 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-18 00:26:22.118803 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-18 00:26:22.118813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-18 
00:26:22.118825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-18 00:26:22.118835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-18 00:26:22.118846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-18 00:26:22.118882 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-18 00:26:22.118895 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-18 00:26:22.118908 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-18 00:26:22.118920 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-18 00:26:22.118933 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-18 00:26:22.118945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-18 00:26:22.118958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-18 00:26:22.118970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-18 00:26:22.118983 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-18 00:26:22.118995 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-18 00:26:22.119007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-18 00:26:22.119020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-18 00:26:22.119033 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-18 00:26:22.119045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-18 00:26:22.119077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-18 00:26:22.119090 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-18 00:26:22.119102 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-18 00:26:22.119115 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-18 00:26:22.119127 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-18 00:26:22.119139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-18 00:26:22.119151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-18 00:26:22.119164 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-18 00:26:22.119176 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-18 00:26:22.119189 | orchestrator | 2025-09-18 00:26:22.119202 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-18 
00:26:22.119215 | orchestrator | Thursday 18 September 2025 00:26:19 +0000 (0:00:05.992) 0:03:29.043 **** 2025-09-18 00:26:22.119227 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119238 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119260 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119281 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-18 00:26:22.119303 | orchestrator | 2025-09-18 00:26:22.119314 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-18 00:26:22.119333 | orchestrator | Thursday 18 September 2025 00:26:19 +0000 (0:00:00.585) 0:03:29.628 **** 2025-09-18 00:26:22.119344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-18 00:26:22.119355 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:26:22.119388 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-18 00:26:22.119399 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:26:22.119410 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-18 00:26:22.119421 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:26:22.119432 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-18 00:26:22.119444 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:26:22.119456 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-18 00:26:22.119477 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-18 00:26:22.119497 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-18 00:26:22.119517 | orchestrator | 2025-09-18 00:26:22.119537 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-18 00:26:22.119577 | orchestrator | Thursday 18 September 2025 00:26:21 +0000 (0:00:01.449) 0:03:31.077 **** 2025-09-18 00:26:22.119597 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-18 00:26:22.119617 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:26:22.119637 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-18 00:26:22.119656 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:26:22.119667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-18 00:26:22.119678 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:26:22.119689 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
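The skip/changed pattern in these sysctl tasks (testbed-manager only applies "generic", testbed-node-0/1/2 additionally apply the elasticsearch and rabbitmq sets, testbed-node-3/4/5 the compute and k3s_node sets) is what a per-group condition produces. A sketch of such an apply task; the group-matching condition and the target file are assumptions:

- name: "Set sysctl parameters on {{ item.key }}"
  ansible.posix.sysctl:
    name: "{{ parameter.name }}"
    value: "{{ parameter.value }}"
    sysctl_file: /etc/sysctl.d/99-osism.conf  # target file is an assumption
    state: present
    reload: true
  loop: "{{ item.value }}"
  loop_control:
    loop_var: parameter
  when: item.key == 'generic' or item.key in group_names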
1024})  2025-09-18 00:26:22.119699 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:26:22.119710 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-18 00:26:22.119721 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-18 00:26:22.119732 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-18 00:26:22.119743 | orchestrator | 2025-09-18 00:26:22.119754 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-18 00:26:22.119764 | orchestrator | Thursday 18 September 2025 00:26:21 +0000 (0:00:00.504) 0:03:31.582 **** 2025-09-18 00:26:22.119775 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:26:22.119786 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:26:22.119804 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:26:22.119828 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:26:22.119852 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:26:22.119881 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:26:34.987669 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:26:34.987824 | orchestrator | 2025-09-18 00:26:34.987841 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-18 00:26:34.987856 | orchestrator | Thursday 18 September 2025 00:26:22 +0000 (0:00:00.204) 0:03:31.786 **** 2025-09-18 00:26:34.987867 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:34.987881 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:34.987892 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:34.987904 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:34.987941 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:34.987952 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:34.987963 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:34.987974 | orchestrator | 2025-09-18 00:26:34.987985 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-18 00:26:34.987996 | orchestrator | Thursday 18 September 2025 00:26:27 +0000 (0:00:05.730) 0:03:37.516 **** 2025-09-18 00:26:34.988007 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-18 00:26:34.988018 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-18 00:26:34.988029 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:26:34.988040 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-18 00:26:34.988051 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:26:34.988062 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-18 00:26:34.988073 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:26:34.988084 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:26:34.988095 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-18 00:26:34.988105 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:26:34.988116 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-18 00:26:34.988133 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:26:34.988145 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-18 00:26:34.988157 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:26:34.988169 | orchestrator | 2025-09-18 00:26:34.988182 | orchestrator | TASK [osism.commons.services : Start/enable required 
services] ***************** 2025-09-18 00:26:34.988195 | orchestrator | Thursday 18 September 2025 00:26:28 +0000 (0:00:00.301) 0:03:37.817 **** 2025-09-18 00:26:34.988207 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-18 00:26:34.988220 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-18 00:26:34.988232 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-18 00:26:34.988244 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-18 00:26:34.988257 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-18 00:26:34.988269 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-18 00:26:34.988281 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-18 00:26:34.988294 | orchestrator | 2025-09-18 00:26:34.988306 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-18 00:26:34.988318 | orchestrator | Thursday 18 September 2025 00:26:29 +0000 (0:00:01.192) 0:03:39.009 **** 2025-09-18 00:26:34.988351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:26:34.988366 | orchestrator | 2025-09-18 00:26:34.988377 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-18 00:26:34.988388 | orchestrator | Thursday 18 September 2025 00:26:29 +0000 (0:00:00.534) 0:03:39.544 **** 2025-09-18 00:26:34.988398 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:34.988409 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:34.988420 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:34.988431 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:34.988441 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:34.988452 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:34.988462 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:34.988473 | orchestrator | 2025-09-18 00:26:34.988484 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-18 00:26:34.988495 | orchestrator | Thursday 18 September 2025 00:26:31 +0000 (0:00:01.302) 0:03:40.847 **** 2025-09-18 00:26:34.988505 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:34.988516 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:34.988527 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:34.988538 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:34.988548 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:34.988590 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:34.988622 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:34.988641 | orchestrator | 2025-09-18 00:26:34.988662 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-18 00:26:34.988674 | orchestrator | Thursday 18 September 2025 00:26:32 +0000 (0:00:01.382) 0:03:42.229 **** 2025-09-18 00:26:34.988685 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:34.988695 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:34.988706 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:34.988717 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:26:34.988727 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:34.988738 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:34.988748 | orchestrator | changed: [testbed-node-2] 2025-09-18 
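The motd tasks here and just below replace Ubuntu's dynamic MOTD: motd-news is switched off, every pam_motd.so rule found under /etc/pam.d is removed, and static motd/issue files are copied in. A rough equivalent (the regexps and file handling are assumptions):

- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: ENABLED=0

- name: Get all configuration files in /etc/pam.d
  ansible.builtin.find:
    paths: /etc/pam.d
  register: pam_files

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: 'pam_motd\.so'
    state: absent
  loop: "{{ pam_files.files }}"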
00:26:34.988759 | orchestrator | 2025-09-18 00:26:34.988769 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-18 00:26:34.988780 | orchestrator | Thursday 18 September 2025 00:26:33 +0000 (0:00:00.633) 0:03:42.863 **** 2025-09-18 00:26:34.988791 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:34.988802 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:34.988812 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:34.988823 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:34.988833 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:34.988844 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:34.988854 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:34.988865 | orchestrator | 2025-09-18 00:26:34.988875 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-18 00:26:34.988886 | orchestrator | Thursday 18 September 2025 00:26:33 +0000 (0:00:00.748) 0:03:43.611 **** 2025-09-18 00:26:34.988924 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153843.1597176, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.988941 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153865.8099782, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.988953 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153886.2379353, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.988971 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153873.7106898, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.988983 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153881.0564969, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.989002 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153866.1486754, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.989013 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758153877.0842237, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:34.989045 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061236 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061389 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061419 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061440 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061491 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061513 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:26:53.061532 | orchestrator | 2025-09-18 00:26:53.061615 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-18 00:26:53.061642 | orchestrator | Thursday 18 September 2025 00:26:34 +0000 (0:00:01.035) 0:03:44.647 **** 2025-09-18 00:26:53.061662 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:53.061682 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:53.061700 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:53.061718 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:26:53.061737 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:53.061757 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:53.061775 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:26:53.061792 | orchestrator | 2025-09-18 00:26:53.061812 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-18 00:26:53.061834 | orchestrator | Thursday 18 September 2025 00:26:36 +0000 (0:00:01.118) 0:03:45.765 **** 2025-09-18 00:26:53.061853 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:53.061871 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:53.061892 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:53.061911 | orchestrator | changed: 
[testbed-node-5] 2025-09-18 00:26:53.061950 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:53.061964 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:53.061977 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:26:53.061989 | orchestrator | 2025-09-18 00:26:53.062001 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-18 00:26:53.062011 | orchestrator | Thursday 18 September 2025 00:26:37 +0000 (0:00:01.212) 0:03:46.978 **** 2025-09-18 00:26:53.062091 | orchestrator | changed: [testbed-manager] 2025-09-18 00:26:53.062119 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:53.062129 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:26:53.062139 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:53.062148 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:53.062158 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:26:53.062167 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:53.062176 | orchestrator | 2025-09-18 00:26:53.062186 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-18 00:26:53.062196 | orchestrator | Thursday 18 September 2025 00:26:38 +0000 (0:00:01.209) 0:03:48.188 **** 2025-09-18 00:26:53.062218 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:26:53.062227 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:26:53.062237 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:26:53.062246 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:26:53.062255 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:26:53.062265 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:26:53.062275 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:26:53.062284 | orchestrator | 2025-09-18 00:26:53.062294 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-18 00:26:53.062304 | orchestrator | Thursday 18 September 2025 00:26:38 +0000 (0:00:00.313) 0:03:48.501 **** 2025-09-18 00:26:53.062314 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062325 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:53.062334 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:53.062344 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:53.062353 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:53.062363 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:53.062372 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:53.062382 | orchestrator | 2025-09-18 00:26:53.062391 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-18 00:26:53.062401 | orchestrator | Thursday 18 September 2025 00:26:39 +0000 (0:00:00.770) 0:03:49.272 **** 2025-09-18 00:26:53.062417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:26:53.062430 | orchestrator | 2025-09-18 00:26:53.062440 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-18 00:26:53.062450 | orchestrator | Thursday 18 September 2025 00:26:40 +0000 (0:00:00.422) 0:03:49.695 **** 2025-09-18 00:26:53.062459 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062469 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:26:53.062478 | 
orchestrator | changed: [testbed-node-3] 2025-09-18 00:26:53.062488 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:26:53.062497 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:26:53.062507 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:26:53.062516 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:26:53.062526 | orchestrator | 2025-09-18 00:26:53.062535 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-18 00:26:53.062545 | orchestrator | Thursday 18 September 2025 00:26:49 +0000 (0:00:09.355) 0:03:59.050 **** 2025-09-18 00:26:53.062571 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062582 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:53.062591 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:53.062601 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:53.062610 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:53.062620 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:53.062629 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:53.062639 | orchestrator | 2025-09-18 00:26:53.062649 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-18 00:26:53.062659 | orchestrator | Thursday 18 September 2025 00:26:50 +0000 (0:00:01.377) 0:04:00.428 **** 2025-09-18 00:26:53.062668 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062678 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:53.062687 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:53.062697 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:53.062706 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:53.062716 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:53.062725 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:53.062735 | orchestrator | 2025-09-18 00:26:53.062744 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-18 00:26:53.062754 | orchestrator | Thursday 18 September 2025 00:26:52 +0000 (0:00:01.299) 0:04:01.727 **** 2025-09-18 00:26:53.062763 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062779 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:53.062788 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:53.062797 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:53.062807 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:53.062816 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:53.062826 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:53.062835 | orchestrator | 2025-09-18 00:26:53.062845 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-18 00:26:53.062856 | orchestrator | Thursday 18 September 2025 00:26:52 +0000 (0:00:00.272) 0:04:02.000 **** 2025-09-18 00:26:53.062865 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062875 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:53.062884 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:53.062893 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:53.062903 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:53.062912 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:26:53.062922 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:26:53.062931 | orchestrator | 2025-09-18 00:26:53.062941 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-18 00:26:53.062950 | orchestrator | Thursday 18 September 
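The rng steps above install a userspace entropy daemon and drop the older haveged one. A minimal sketch; the package name rng-tools and the service name rngd are assumptions for the Debian family:

- name: Install rng package
  ansible.builtin.apt:
    name: rng-tools  # package name is an assumption
    state: present

- name: Remove haveged package
  ansible.builtin.apt:
    name: haveged
    state: absent

- name: Manage rng service
  ansible.builtin.service:
    name: rngd  # service name is an assumption
    state: started
    enabled: true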
2025 00:26:52 +0000 (0:00:00.425) 0:04:02.425 **** 2025-09-18 00:26:53.062960 | orchestrator | ok: [testbed-manager] 2025-09-18 00:26:53.062969 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:26:53.062979 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:26:53.062988 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:26:53.062997 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:26:53.063015 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:02.192433 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:02.192628 | orchestrator | 2025-09-18 00:28:02.192657 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-18 00:28:02.192680 | orchestrator | Thursday 18 September 2025 00:26:53 +0000 (0:00:00.301) 0:04:02.727 **** 2025-09-18 00:28:02.192699 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:02.192719 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:02.192740 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:02.192761 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:02.192782 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:02.192794 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:02.192805 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:02.192816 | orchestrator | 2025-09-18 00:28:02.192827 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-18 00:28:02.192839 | orchestrator | Thursday 18 September 2025 00:26:58 +0000 (0:00:05.761) 0:04:08.488 **** 2025-09-18 00:28:02.192852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:28:02.192866 | orchestrator | 2025-09-18 00:28:02.192878 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-18 00:28:02.192888 | orchestrator | Thursday 18 September 2025 00:26:59 +0000 (0:00:00.385) 0:04:08.874 **** 2025-09-18 00:28:02.192900 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.192911 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-18 00:28:02.192922 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.192933 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:02.192944 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-18 00:28:02.192955 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.192966 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-18 00:28:02.192980 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:02.192993 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.193006 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-18 00:28:02.193018 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:02.193071 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.193116 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-18 00:28:02.193144 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:02.193163 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.193183 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:02.193201 | 
orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-18 00:28:02.193219 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:02.193230 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-18 00:28:02.193241 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-18 00:28:02.193252 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:02.193262 | orchestrator | 2025-09-18 00:28:02.193273 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-18 00:28:02.193284 | orchestrator | Thursday 18 September 2025 00:26:59 +0000 (0:00:00.344) 0:04:09.219 **** 2025-09-18 00:28:02.193295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:28:02.193306 | orchestrator | 2025-09-18 00:28:02.193317 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-18 00:28:02.193329 | orchestrator | Thursday 18 September 2025 00:26:59 +0000 (0:00:00.374) 0:04:09.594 **** 2025-09-18 00:28:02.193340 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-18 00:28:02.193351 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:02.193361 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-18 00:28:02.193372 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-18 00:28:02.193383 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:02.193393 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-18 00:28:02.193404 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:02.193415 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:02.193425 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-18 00:28:02.193436 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-18 00:28:02.193447 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:02.193458 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:02.193469 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-18 00:28:02.193479 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:02.193490 | orchestrator | 2025-09-18 00:28:02.193501 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-18 00:28:02.193511 | orchestrator | Thursday 18 September 2025 00:27:00 +0000 (0:00:00.320) 0:04:09.914 **** 2025-09-18 00:28:02.193549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:28:02.193560 | orchestrator | 2025-09-18 00:28:02.193571 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-18 00:28:02.193582 | orchestrator | Thursday 18 September 2025 00:27:00 +0000 (0:00:00.416) 0:04:10.330 **** 2025-09-18 00:28:02.193593 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:02.193624 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:02.193636 | orchestrator | changed: [testbed-node-4] 2025-09-18 
00:28:02.193647 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:02.193658 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:02.193668 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:02.193679 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:02.193690 | orchestrator | 2025-09-18 00:28:02.193701 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-18 00:28:02.193724 | orchestrator | Thursday 18 September 2025 00:27:34 +0000 (0:00:33.787) 0:04:44.118 **** 2025-09-18 00:28:02.193735 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:02.193746 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:02.193757 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:02.193768 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:02.193778 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:02.193789 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:02.193800 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:02.193811 | orchestrator | 2025-09-18 00:28:02.193822 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-18 00:28:02.193833 | orchestrator | Thursday 18 September 2025 00:27:42 +0000 (0:00:08.000) 0:04:52.118 **** 2025-09-18 00:28:02.193843 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:02.193854 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:02.193865 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:02.193876 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:02.193886 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:02.193897 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:02.193908 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:02.193919 | orchestrator | 2025-09-18 00:28:02.193930 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-18 00:28:02.193941 | orchestrator | Thursday 18 September 2025 00:27:50 +0000 (0:00:07.630) 0:04:59.749 **** 2025-09-18 00:28:02.193952 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:02.193963 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:02.193974 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:02.193985 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:02.193995 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:02.194006 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:02.194075 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:02.194088 | orchestrator | 2025-09-18 00:28:02.194099 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-18 00:28:02.194111 | orchestrator | Thursday 18 September 2025 00:27:51 +0000 (0:00:01.870) 0:05:01.619 **** 2025-09-18 00:28:02.194122 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:02.194133 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:02.194152 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:02.194163 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:02.194174 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:02.194184 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:02.194196 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:02.194206 | orchestrator | 2025-09-18 00:28:02.194217 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-18 00:28:02.194228 | orchestrator 
| Thursday 18 September 2025 00:27:58 +0000 (0:00:06.107) 0:05:07.727 **** 2025-09-18 00:28:02.194240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:28:02.194253 | orchestrator | 2025-09-18 00:28:02.194264 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-18 00:28:02.194275 | orchestrator | Thursday 18 September 2025 00:27:58 +0000 (0:00:00.556) 0:05:08.283 **** 2025-09-18 00:28:02.194285 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:02.194296 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:02.194307 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:02.194318 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:02.194329 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:02.194340 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:02.194350 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:02.194361 | orchestrator | 2025-09-18 00:28:02.194372 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-18 00:28:02.194390 | orchestrator | Thursday 18 September 2025 00:27:59 +0000 (0:00:00.760) 0:05:09.043 **** 2025-09-18 00:28:02.194402 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:02.194413 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:02.194424 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:02.194434 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:02.194445 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:02.194456 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:02.194467 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:02.194478 | orchestrator | 2025-09-18 00:28:02.194489 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-18 00:28:02.194500 | orchestrator | Thursday 18 September 2025 00:28:01 +0000 (0:00:01.704) 0:05:10.748 **** 2025-09-18 00:28:02.194511 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:02.194542 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:02.194553 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:02.194564 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:02.194575 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:02.194586 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:02.194597 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:02.194608 | orchestrator | 2025-09-18 00:28:02.194619 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-18 00:28:02.194630 | orchestrator | Thursday 18 September 2025 00:28:01 +0000 (0:00:00.821) 0:05:11.569 **** 2025-09-18 00:28:02.194641 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:02.194652 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:02.194662 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:02.194673 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:02.194687 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:02.194705 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:02.194725 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:02.194743 | orchestrator | 2025-09-18 00:28:02.194763 | orchestrator | TASK [osism.commons.timezone : Ensure UTC 
in /etc/adjtime] ********************* 2025-09-18 00:28:02.194783 | orchestrator | Thursday 18 September 2025 00:28:02 +0000 (0:00:00.288) 0:05:11.857 **** 2025-09-18 00:28:29.169951 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:29.170171 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:29.170189 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:29.170202 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:29.170214 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:29.170226 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:29.170237 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:29.170248 | orchestrator | 2025-09-18 00:28:29.170261 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-18 00:28:29.170274 | orchestrator | Thursday 18 September 2025 00:28:02 +0000 (0:00:00.386) 0:05:12.244 **** 2025-09-18 00:28:29.170285 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.170298 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:29.170308 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:29.170319 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:29.170330 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:29.170341 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:29.170351 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:29.170362 | orchestrator | 2025-09-18 00:28:29.170373 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-18 00:28:29.170384 | orchestrator | Thursday 18 September 2025 00:28:02 +0000 (0:00:00.295) 0:05:12.539 **** 2025-09-18 00:28:29.170396 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:29.170407 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:29.170418 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:29.170428 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:29.170439 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:29.170452 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:29.170464 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:29.170529 | orchestrator | 2025-09-18 00:28:29.170543 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-18 00:28:29.170556 | orchestrator | Thursday 18 September 2025 00:28:03 +0000 (0:00:00.319) 0:05:12.859 **** 2025-09-18 00:28:29.170568 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.170581 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:29.170594 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:29.170606 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:29.170619 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:29.170631 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:29.170643 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:29.170656 | orchestrator | 2025-09-18 00:28:29.170668 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-18 00:28:29.170681 | orchestrator | Thursday 18 September 2025 00:28:03 +0000 (0:00:00.295) 0:05:13.154 **** 2025-09-18 00:28:29.170694 | orchestrator | ok: [testbed-manager] =>  2025-09-18 00:28:29.170707 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170719 | orchestrator | ok: [testbed-node-3] =>  2025-09-18 00:28:29.170732 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170745 | orchestrator | ok: [testbed-node-4] 
=>  2025-09-18 00:28:29.170757 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170769 | orchestrator | ok: [testbed-node-5] =>  2025-09-18 00:28:29.170782 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170794 | orchestrator | ok: [testbed-node-0] =>  2025-09-18 00:28:29.170806 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170817 | orchestrator | ok: [testbed-node-1] =>  2025-09-18 00:28:29.170828 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170839 | orchestrator | ok: [testbed-node-2] =>  2025-09-18 00:28:29.170849 | orchestrator |  docker_version: 5:27.5.1 2025-09-18 00:28:29.170860 | orchestrator | 2025-09-18 00:28:29.170871 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-18 00:28:29.170881 | orchestrator | Thursday 18 September 2025 00:28:03 +0000 (0:00:00.294) 0:05:13.449 **** 2025-09-18 00:28:29.170892 | orchestrator | ok: [testbed-manager] =>  2025-09-18 00:28:29.170903 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.170914 | orchestrator | ok: [testbed-node-3] =>  2025-09-18 00:28:29.170925 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.170936 | orchestrator | ok: [testbed-node-4] =>  2025-09-18 00:28:29.170946 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.170957 | orchestrator | ok: [testbed-node-5] =>  2025-09-18 00:28:29.170968 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.170978 | orchestrator | ok: [testbed-node-0] =>  2025-09-18 00:28:29.170989 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.170999 | orchestrator | ok: [testbed-node-1] =>  2025-09-18 00:28:29.171010 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.171021 | orchestrator | ok: [testbed-node-2] =>  2025-09-18 00:28:29.171031 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-18 00:28:29.171042 | orchestrator | 2025-09-18 00:28:29.171053 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-18 00:28:29.171064 | orchestrator | Thursday 18 September 2025 00:28:04 +0000 (0:00:00.311) 0:05:13.761 **** 2025-09-18 00:28:29.171074 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:29.171085 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:29.171096 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:29.171106 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:29.171117 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:29.171128 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:29.171138 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:29.171149 | orchestrator | 2025-09-18 00:28:29.171160 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-18 00:28:29.171171 | orchestrator | Thursday 18 September 2025 00:28:04 +0000 (0:00:00.278) 0:05:14.039 **** 2025-09-18 00:28:29.171181 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:29.171201 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:29.171211 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:29.171222 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:29.171233 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:29.171244 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:29.171254 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:29.171265 | orchestrator | 2025-09-18 
00:28:29.171276 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-18 00:28:29.171287 | orchestrator | Thursday 18 September 2025 00:28:04 +0000 (0:00:00.269) 0:05:14.308 **** 2025-09-18 00:28:29.171319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:28:29.171334 | orchestrator | 2025-09-18 00:28:29.171346 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-18 00:28:29.171357 | orchestrator | Thursday 18 September 2025 00:28:05 +0000 (0:00:00.408) 0:05:14.717 **** 2025-09-18 00:28:29.171367 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.171378 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:29.171389 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:29.171400 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:29.171411 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:29.171421 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:29.171432 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:29.171443 | orchestrator | 2025-09-18 00:28:29.171454 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-18 00:28:29.171464 | orchestrator | Thursday 18 September 2025 00:28:06 +0000 (0:00:01.024) 0:05:15.742 **** 2025-09-18 00:28:29.171475 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.171486 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:28:29.171530 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:28:29.171543 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:28:29.171554 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:28:29.171564 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:28:29.171586 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:28:29.171598 | orchestrator | 2025-09-18 00:28:29.171608 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-18 00:28:29.171621 | orchestrator | Thursday 18 September 2025 00:28:09 +0000 (0:00:03.344) 0:05:19.086 **** 2025-09-18 00:28:29.171632 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-18 00:28:29.171643 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-18 00:28:29.171654 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-18 00:28:29.171664 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-18 00:28:29.171675 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-18 00:28:29.171686 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-18 00:28:29.171697 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:28:29.171708 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-18 00:28:29.171718 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-18 00:28:29.171729 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-18 00:28:29.171740 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:28:29.171751 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-18 00:28:29.171767 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-18 00:28:29.171778 | orchestrator | skipping: [testbed-node-5] 
=> (item=docker-engine)  2025-09-18 00:28:29.171789 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:28:29.171799 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-18 00:28:29.171810 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-18 00:28:29.171821 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-18 00:28:29.171840 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:28:29.171852 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-18 00:28:29.171862 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-18 00:28:29.171873 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-18 00:28:29.171884 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:28:29.171894 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:28:29.171905 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-18 00:28:29.171916 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-18 00:28:29.171926 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-18 00:28:29.171937 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:28:29.171947 | orchestrator | 2025-09-18 00:28:29.171958 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-18 00:28:29.171969 | orchestrator | Thursday 18 September 2025 00:28:10 +0000 (0:00:00.648) 0:05:19.735 **** 2025-09-18 00:28:29.171980 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.171990 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:29.172001 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:29.172012 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:29.172022 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:29.172033 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:29.172044 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:29.172054 | orchestrator | 2025-09-18 00:28:29.172065 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-18 00:28:29.172076 | orchestrator | Thursday 18 September 2025 00:28:16 +0000 (0:00:06.468) 0:05:26.204 **** 2025-09-18 00:28:29.172087 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:29.172097 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.172108 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:29.172119 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:29.172129 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:28:29.172140 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:29.172150 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:29.172161 | orchestrator | 2025-09-18 00:28:29.172172 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-18 00:28:29.172183 | orchestrator | Thursday 18 September 2025 00:28:18 +0000 (0:00:01.669) 0:05:27.873 **** 2025-09-18 00:28:29.172193 | orchestrator | ok: [testbed-manager] 2025-09-18 00:28:29.172204 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:29.172215 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:29.172225 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:28:29.172236 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:28:29.172246 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:28:29.172257 | orchestrator | changed: [testbed-node-0] 2025-09-18 
00:28:29.172267 | orchestrator | 2025-09-18 00:28:29.172278 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-18 00:28:29.172289 | orchestrator | Thursday 18 September 2025 00:28:25 +0000 (0:00:07.761) 0:05:35.634 **** 2025-09-18 00:28:29.172300 | orchestrator | changed: [testbed-manager] 2025-09-18 00:28:29.172310 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:28:29.172321 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:28:29.172339 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.248364 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.248533 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.248559 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.248572 | orchestrator | 2025-09-18 00:29:14.248585 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-18 00:29:14.248598 | orchestrator | Thursday 18 September 2025 00:28:29 +0000 (0:00:03.197) 0:05:38.832 **** 2025-09-18 00:29:14.248609 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.248621 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.248632 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.248667 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.248679 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.248689 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.248700 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.248711 | orchestrator | 2025-09-18 00:29:14.248722 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-18 00:29:14.248732 | orchestrator | Thursday 18 September 2025 00:28:30 +0000 (0:00:01.316) 0:05:40.148 **** 2025-09-18 00:29:14.248743 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.248754 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.248765 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.248775 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.248786 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.248796 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.248807 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.248818 | orchestrator | 2025-09-18 00:29:14.248828 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-18 00:29:14.248839 | orchestrator | Thursday 18 September 2025 00:28:31 +0000 (0:00:01.360) 0:05:41.509 **** 2025-09-18 00:29:14.248850 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.248860 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.248871 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.248881 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.248892 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.248902 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.248913 | orchestrator | changed: [testbed-manager] 2025-09-18 00:29:14.248923 | orchestrator | 2025-09-18 00:29:14.248934 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-18 00:29:14.248944 | orchestrator | Thursday 18 September 2025 00:28:32 +0000 (0:00:00.808) 0:05:42.317 **** 2025-09-18 00:29:14.248955 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.248967 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.248977 | 
orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.248988 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.249013 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.249024 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.249035 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.249045 | orchestrator | 2025-09-18 00:29:14.249056 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-18 00:29:14.249067 | orchestrator | Thursday 18 September 2025 00:28:42 +0000 (0:00:09.974) 0:05:52.291 **** 2025-09-18 00:29:14.249077 | orchestrator | changed: [testbed-manager] 2025-09-18 00:29:14.249088 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.249099 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.249109 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.249120 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.249130 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.249141 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.249151 | orchestrator | 2025-09-18 00:29:14.249162 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-18 00:29:14.249173 | orchestrator | Thursday 18 September 2025 00:28:43 +0000 (0:00:00.961) 0:05:53.253 **** 2025-09-18 00:29:14.249183 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.249194 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.249204 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.249215 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.249225 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.249236 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.249246 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.249257 | orchestrator | 2025-09-18 00:29:14.249273 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-18 00:29:14.249290 | orchestrator | Thursday 18 September 2025 00:28:52 +0000 (0:00:08.967) 0:06:02.220 **** 2025-09-18 00:29:14.249317 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.249334 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.249350 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.249367 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.249384 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.249401 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.249418 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.249435 | orchestrator | 2025-09-18 00:29:14.249455 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-18 00:29:14.249499 | orchestrator | Thursday 18 September 2025 00:29:03 +0000 (0:00:11.395) 0:06:13.616 **** 2025-09-18 00:29:14.249512 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-18 00:29:14.249523 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-18 00:29:14.249534 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-18 00:29:14.249545 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-18 00:29:14.249555 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-18 00:29:14.249566 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-18 00:29:14.249576 | orchestrator | ok: [testbed-node-0] => 
(item=python3-docker) 2025-09-18 00:29:14.249586 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-18 00:29:14.249597 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-18 00:29:14.249607 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-18 00:29:14.249618 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-18 00:29:14.249628 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-18 00:29:14.249639 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-18 00:29:14.249649 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-18 00:29:14.249660 | orchestrator | 2025-09-18 00:29:14.249670 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-18 00:29:14.249702 | orchestrator | Thursday 18 September 2025 00:29:05 +0000 (0:00:01.225) 0:06:14.842 **** 2025-09-18 00:29:14.249713 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:14.249724 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.249734 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.249745 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.249755 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.249766 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.249776 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.249787 | orchestrator | 2025-09-18 00:29:14.249797 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-18 00:29:14.249808 | orchestrator | Thursday 18 September 2025 00:29:05 +0000 (0:00:00.547) 0:06:15.389 **** 2025-09-18 00:29:14.249819 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.249829 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:14.249840 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:14.249850 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:14.249860 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:14.249871 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:14.249881 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:14.249892 | orchestrator | 2025-09-18 00:29:14.249902 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-18 00:29:14.249914 | orchestrator | Thursday 18 September 2025 00:29:09 +0000 (0:00:03.849) 0:06:19.239 **** 2025-09-18 00:29:14.249925 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:14.249935 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.249946 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.249956 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.249967 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.249991 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.250002 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.250091 | orchestrator | 2025-09-18 00:29:14.250104 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-18 00:29:14.250116 | orchestrator | Thursday 18 September 2025 00:29:10 +0000 (0:00:00.524) 0:06:19.764 **** 2025-09-18 00:29:14.250127 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-18 00:29:14.250138 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-18 00:29:14.250149 | 
orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:14.250160 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-18 00:29:14.250178 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-18 00:29:14.250189 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.250200 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-18 00:29:14.250211 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-18 00:29:14.250221 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.250232 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-18 00:29:14.250243 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-18 00:29:14.250254 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.250265 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-18 00:29:14.250275 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-18 00:29:14.250286 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.250297 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-18 00:29:14.250307 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-18 00:29:14.250318 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.250329 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-18 00:29:14.250339 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-18 00:29:14.250350 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.250361 | orchestrator | 2025-09-18 00:29:14.250372 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-18 00:29:14.250383 | orchestrator | Thursday 18 September 2025 00:29:10 +0000 (0:00:00.765) 0:06:20.529 **** 2025-09-18 00:29:14.250394 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:14.250404 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.250415 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.250426 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.250437 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.250447 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.250458 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.250469 | orchestrator | 2025-09-18 00:29:14.250502 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-18 00:29:14.250513 | orchestrator | Thursday 18 September 2025 00:29:11 +0000 (0:00:00.596) 0:06:21.125 **** 2025-09-18 00:29:14.250524 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:14.250535 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.250545 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.250556 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.250566 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.250577 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.250587 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.250598 | orchestrator | 2025-09-18 00:29:14.250608 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-18 00:29:14.250619 | orchestrator | Thursday 18 September 2025 00:29:11 +0000 (0:00:00.533) 0:06:21.659 **** 2025-09-18 00:29:14.250630 | orchestrator | 
skipping: [testbed-manager] 2025-09-18 00:29:14.250640 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:14.250651 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:29:14.250661 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:29:14.250672 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:29:14.250691 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:29:14.250702 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:29:14.250713 | orchestrator | 2025-09-18 00:29:14.250724 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-18 00:29:14.250735 | orchestrator | Thursday 18 September 2025 00:29:12 +0000 (0:00:00.532) 0:06:22.192 **** 2025-09-18 00:29:14.250745 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:14.250765 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:29:36.416750 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:29:36.416845 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:29:36.416856 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.416864 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.416871 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.416878 | orchestrator | 2025-09-18 00:29:36.416886 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-18 00:29:36.416895 | orchestrator | Thursday 18 September 2025 00:29:14 +0000 (0:00:01.724) 0:06:23.916 **** 2025-09-18 00:29:36.416903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:29:36.416911 | orchestrator | 2025-09-18 00:29:36.416918 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-18 00:29:36.416925 | orchestrator | Thursday 18 September 2025 00:29:15 +0000 (0:00:01.022) 0:06:24.939 **** 2025-09-18 00:29:36.416932 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.416939 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.416946 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:36.416953 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.416959 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.416966 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.416972 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.416979 | orchestrator | 2025-09-18 00:29:36.416986 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-18 00:29:36.416993 | orchestrator | Thursday 18 September 2025 00:29:16 +0000 (0:00:00.863) 0:06:25.802 **** 2025-09-18 00:29:36.416999 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417006 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.417013 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:36.417020 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.417027 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.417034 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.417040 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.417047 | orchestrator | 2025-09-18 00:29:36.417054 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-18 00:29:36.417060 | orchestrator | Thursday 18 September 2025 00:29:16 +0000 
(0:00:00.854) 0:06:26.657 **** 2025-09-18 00:29:36.417067 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417074 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.417080 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:36.417102 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.417109 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.417116 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.417122 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.417129 | orchestrator | 2025-09-18 00:29:36.417136 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-18 00:29:36.417144 | orchestrator | Thursday 18 September 2025 00:29:18 +0000 (0:00:01.337) 0:06:27.994 **** 2025-09-18 00:29:36.417150 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:36.417157 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:29:36.417164 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:29:36.417170 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:29:36.417177 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.417183 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.417207 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.417214 | orchestrator | 2025-09-18 00:29:36.417232 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-18 00:29:36.417240 | orchestrator | Thursday 18 September 2025 00:29:19 +0000 (0:00:01.532) 0:06:29.527 **** 2025-09-18 00:29:36.417247 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417253 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.417260 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:36.417266 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.417273 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.417279 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.417286 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.417292 | orchestrator | 2025-09-18 00:29:36.417299 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-18 00:29:36.417306 | orchestrator | Thursday 18 September 2025 00:29:21 +0000 (0:00:01.394) 0:06:30.922 **** 2025-09-18 00:29:36.417313 | orchestrator | changed: [testbed-manager] 2025-09-18 00:29:36.417320 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.417328 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:36.417335 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.417343 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.417350 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.417358 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.417365 | orchestrator | 2025-09-18 00:29:36.417373 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-18 00:29:36.417381 | orchestrator | Thursday 18 September 2025 00:29:22 +0000 (0:00:01.461) 0:06:32.383 **** 2025-09-18 00:29:36.417389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:29:36.417397 | orchestrator | 2025-09-18 00:29:36.417405 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 
2025-09-18 00:29:36.417412 | orchestrator | Thursday 18 September 2025 00:29:23 +0000 (0:00:01.085) 0:06:33.469 **** 2025-09-18 00:29:36.417420 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417428 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:29:36.417435 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:29:36.417443 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:29:36.417451 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.417477 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.417485 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.417492 | orchestrator | 2025-09-18 00:29:36.417500 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-18 00:29:36.417507 | orchestrator | Thursday 18 September 2025 00:29:25 +0000 (0:00:01.498) 0:06:34.967 **** 2025-09-18 00:29:36.417515 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417523 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:29:36.417542 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:29:36.417550 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:29:36.417558 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.417565 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.417573 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.417580 | orchestrator | 2025-09-18 00:29:36.417588 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-18 00:29:36.417595 | orchestrator | Thursday 18 September 2025 00:29:26 +0000 (0:00:01.181) 0:06:36.149 **** 2025-09-18 00:29:36.417603 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417610 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:29:36.417618 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:29:36.417625 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:29:36.417633 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.417640 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.417647 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.417655 | orchestrator | 2025-09-18 00:29:36.417663 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-18 00:29:36.417676 | orchestrator | Thursday 18 September 2025 00:29:27 +0000 (0:00:01.115) 0:06:37.265 **** 2025-09-18 00:29:36.417683 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.417689 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:29:36.417696 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:29:36.417702 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:29:36.417709 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.417716 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.417722 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.417729 | orchestrator | 2025-09-18 00:29:36.417736 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-18 00:29:36.417742 | orchestrator | Thursday 18 September 2025 00:29:28 +0000 (0:00:01.161) 0:06:38.427 **** 2025-09-18 00:29:36.417749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:29:36.417756 | orchestrator | 2025-09-18 00:29:36.417763 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417770 | 
orchestrator | Thursday 18 September 2025 00:29:29 +0000 (0:00:01.055) 0:06:39.482 **** 2025-09-18 00:29:36.417776 | orchestrator | 2025-09-18 00:29:36.417783 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417789 | orchestrator | Thursday 18 September 2025 00:29:29 +0000 (0:00:00.037) 0:06:39.520 **** 2025-09-18 00:29:36.417796 | orchestrator | 2025-09-18 00:29:36.417803 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417810 | orchestrator | Thursday 18 September 2025 00:29:29 +0000 (0:00:00.039) 0:06:39.560 **** 2025-09-18 00:29:36.417816 | orchestrator | 2025-09-18 00:29:36.417823 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417830 | orchestrator | Thursday 18 September 2025 00:29:29 +0000 (0:00:00.047) 0:06:39.607 **** 2025-09-18 00:29:36.417837 | orchestrator | 2025-09-18 00:29:36.417843 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417850 | orchestrator | Thursday 18 September 2025 00:29:29 +0000 (0:00:00.037) 0:06:39.644 **** 2025-09-18 00:29:36.417857 | orchestrator | 2025-09-18 00:29:36.417863 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417870 | orchestrator | Thursday 18 September 2025 00:29:30 +0000 (0:00:00.038) 0:06:39.683 **** 2025-09-18 00:29:36.417876 | orchestrator | 2025-09-18 00:29:36.417883 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-18 00:29:36.417890 | orchestrator | Thursday 18 September 2025 00:29:30 +0000 (0:00:00.043) 0:06:39.727 **** 2025-09-18 00:29:36.417896 | orchestrator | 2025-09-18 00:29:36.417903 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-18 00:29:36.417909 | orchestrator | Thursday 18 September 2025 00:29:30 +0000 (0:00:00.038) 0:06:39.765 **** 2025-09-18 00:29:36.417916 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:29:36.417923 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:29:36.417930 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:29:36.417936 | orchestrator | 2025-09-18 00:29:36.417943 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-18 00:29:36.417949 | orchestrator | Thursday 18 September 2025 00:29:31 +0000 (0:00:01.199) 0:06:40.965 **** 2025-09-18 00:29:36.417956 | orchestrator | changed: [testbed-manager] 2025-09-18 00:29:36.417963 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.417970 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:29:36.417976 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.417983 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.417989 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.418002 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.418009 | orchestrator | 2025-09-18 00:29:36.418059 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-18 00:29:36.418072 | orchestrator | Thursday 18 September 2025 00:29:32 +0000 (0:00:01.352) 0:06:42.318 **** 2025-09-18 00:29:36.418079 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:29:36.418085 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.418092 | orchestrator | changed: 
[testbed-node-4] 2025-09-18 00:29:36.418099 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.418105 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:29:36.418112 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.418118 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:29:36.418125 | orchestrator | 2025-09-18 00:29:36.418132 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-18 00:29:36.418138 | orchestrator | Thursday 18 September 2025 00:29:35 +0000 (0:00:02.568) 0:06:44.886 **** 2025-09-18 00:29:36.418145 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:29:36.418151 | orchestrator | 2025-09-18 00:29:36.418158 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-18 00:29:36.418164 | orchestrator | Thursday 18 September 2025 00:29:35 +0000 (0:00:00.129) 0:06:45.016 **** 2025-09-18 00:29:36.418171 | orchestrator | ok: [testbed-manager] 2025-09-18 00:29:36.418178 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:29:36.418184 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:29:36.418191 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:29:36.418203 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:03.499805 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:03.499954 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:03.499983 | orchestrator | 2025-09-18 00:30:03.500001 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-18 00:30:03.500014 | orchestrator | Thursday 18 September 2025 00:29:36 +0000 (0:00:01.063) 0:06:46.079 **** 2025-09-18 00:30:03.500027 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:03.500038 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:03.500068 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:03.500080 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:03.500091 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:03.500101 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:03.500112 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:03.500123 | orchestrator | 2025-09-18 00:30:03.500134 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-18 00:30:03.500156 | orchestrator | Thursday 18 September 2025 00:29:36 +0000 (0:00:00.525) 0:06:46.604 **** 2025-09-18 00:30:03.500168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:30:03.500182 | orchestrator | 2025-09-18 00:30:03.500193 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-18 00:30:03.500205 | orchestrator | Thursday 18 September 2025 00:29:38 +0000 (0:00:01.130) 0:06:47.734 **** 2025-09-18 00:30:03.500216 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.500228 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:03.500239 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:03.500250 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:03.500260 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:03.500271 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:03.500282 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:03.500292 | orchestrator | 
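The "Create facts directory" / "Copy docker fact files" tasks above use Ansible's local-facts mechanism: files placed under /etc/ansible/facts.d/ named *.fact are picked up on the next fact-gathering run and exposed under ansible_local. A minimal sketch of that pattern is below, assuming the two fact files (docker_containers, docker_images) are rendered from templates and made executable; the template names and modes are illustrative assumptions, not the actual osism.services.docker role code.

# Sketch only: standard Ansible local-facts layout, not the OSISM role verbatim.
- name: Create facts directory
  become: true
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy docker fact files
  become: true
  ansible.builtin.template:
    src: "{{ item }}.fact.j2"   # assumed template name
    dest: "/etc/ansible/facts.d/{{ item }}.fact"
    mode: "0755"                # executable .fact files must print JSON
  loop:
    - docker_containers
    - docker_images

On later plays, whatever these fact files report is available as ansible_local.docker_containers and ansible_local.docker_images on each host, without shelling out to the docker CLI again.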
2025-09-18 00:30:03.500303 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-18 00:30:03.500314 | orchestrator | Thursday 18 September 2025 00:29:38 +0000 (0:00:00.888) 0:06:48.623 **** 2025-09-18 00:30:03.500326 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-18 00:30:03.500339 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-18 00:30:03.500352 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-18 00:30:03.500406 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-18 00:30:03.500421 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-18 00:30:03.500466 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-18 00:30:03.500487 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-18 00:30:03.500508 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-18 00:30:03.500527 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-18 00:30:03.500544 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-18 00:30:03.500568 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-18 00:30:03.500593 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-18 00:30:03.500612 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-18 00:30:03.500630 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-18 00:30:03.500649 | orchestrator | 2025-09-18 00:30:03.500670 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-18 00:30:03.500688 | orchestrator | Thursday 18 September 2025 00:29:41 +0000 (0:00:02.608) 0:06:51.232 **** 2025-09-18 00:30:03.500706 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:03.500719 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:03.500730 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:03.500741 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:03.500751 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:03.500762 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:03.500773 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:03.500783 | orchestrator | 2025-09-18 00:30:03.500794 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-18 00:30:03.500805 | orchestrator | Thursday 18 September 2025 00:29:42 +0000 (0:00:00.511) 0:06:51.744 **** 2025-09-18 00:30:03.500818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:30:03.500830 | orchestrator | 2025-09-18 00:30:03.500841 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-18 00:30:03.500852 | orchestrator | Thursday 18 September 2025 00:29:43 +0000 (0:00:01.000) 0:06:52.745 **** 2025-09-18 00:30:03.500863 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.500874 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:03.500885 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:03.500895 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:03.500906 | orchestrator | ok: 
[testbed-node-0] 2025-09-18 00:30:03.500916 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:03.500927 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:03.500937 | orchestrator | 2025-09-18 00:30:03.500948 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-18 00:30:03.500959 | orchestrator | Thursday 18 September 2025 00:29:43 +0000 (0:00:00.868) 0:06:53.613 **** 2025-09-18 00:30:03.500970 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.500981 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:03.500991 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:03.501002 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:03.501013 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:03.501023 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:03.501034 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:03.501044 | orchestrator | 2025-09-18 00:30:03.501055 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-18 00:30:03.501085 | orchestrator | Thursday 18 September 2025 00:29:44 +0000 (0:00:00.857) 0:06:54.471 **** 2025-09-18 00:30:03.501097 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:03.501108 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:03.501119 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:03.501130 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:03.501161 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:03.501172 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:03.501182 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:03.501193 | orchestrator | 2025-09-18 00:30:03.501204 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-18 00:30:03.501215 | orchestrator | Thursday 18 September 2025 00:29:45 +0000 (0:00:00.576) 0:06:55.047 **** 2025-09-18 00:30:03.501225 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.501236 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:03.501247 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:03.501257 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:03.501268 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:03.501278 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:03.501289 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:03.501299 | orchestrator | 2025-09-18 00:30:03.501310 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-18 00:30:03.501321 | orchestrator | Thursday 18 September 2025 00:29:47 +0000 (0:00:02.031) 0:06:57.079 **** 2025-09-18 00:30:03.501332 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:03.501342 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:03.501353 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:03.501364 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:03.501375 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:03.501385 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:03.501396 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:03.501407 | orchestrator | 2025-09-18 00:30:03.501418 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-18 00:30:03.501428 | orchestrator | Thursday 18 September 2025 00:29:47 +0000 (0:00:00.520) 0:06:57.600 **** 2025-09-18 00:30:03.501467 | orchestrator | ok: [testbed-manager] 
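The docker_compose tasks above retire any standalone docker-compose binary or distro package and move to the docker-compose-plugin package from the Docker apt repository, which provides Compose v2 as the `docker compose` CLI plugin. A hedged sketch of the install step on Debian-family hosts:

# Sketch only: install the Compose v2 CLI plugin on Debian/Ubuntu hosts.
- name: Install docker-compose-plugin package
  ansible.builtin.apt:
    name: docker-compose-plugin
    state: present

On the manager this task reports ok because the plugin is already in place, while the freshly provisioned nodes report changed.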
2025-09-18 00:30:03.501478 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:03.501489 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:03.501500 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:03.501510 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:03.501521 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:03.501531 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:03.501542 | orchestrator | 2025-09-18 00:30:03.501553 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-18 00:30:03.501571 | orchestrator | Thursday 18 September 2025 00:29:55 +0000 (0:00:07.907) 0:07:05.507 **** 2025-09-18 00:30:03.501583 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.501593 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:03.501604 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:03.501615 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:03.501626 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:03.501636 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:03.501647 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:03.501664 | orchestrator | 2025-09-18 00:30:03.501683 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-18 00:30:03.501701 | orchestrator | Thursday 18 September 2025 00:29:57 +0000 (0:00:01.367) 0:07:06.875 **** 2025-09-18 00:30:03.501718 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.501736 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:03.501754 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:03.501772 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:03.501791 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:03.501803 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:03.501813 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:03.501824 | orchestrator | 2025-09-18 00:30:03.501835 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-18 00:30:03.501846 | orchestrator | Thursday 18 September 2025 00:29:59 +0000 (0:00:01.857) 0:07:08.732 **** 2025-09-18 00:30:03.501857 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.501877 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:03.501888 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:03.501898 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:03.501909 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:03.501920 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:03.501930 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:03.501941 | orchestrator | 2025-09-18 00:30:03.501952 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-18 00:30:03.501962 | orchestrator | Thursday 18 September 2025 00:30:00 +0000 (0:00:01.938) 0:07:10.670 **** 2025-09-18 00:30:03.501973 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:03.501984 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:03.501995 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:03.502005 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:03.502016 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:03.502027 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:03.502037 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:03.502048 | orchestrator | 2025-09-18 00:30:03.502059 | 
orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-18 00:30:03.502070 | orchestrator | Thursday 18 September 2025 00:30:01 +0000 (0:00:00.916) 0:07:11.587 **** 2025-09-18 00:30:03.502080 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:03.502156 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:03.502168 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:03.502179 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:03.502190 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:03.502201 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:03.502212 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:03.502223 | orchestrator | 2025-09-18 00:30:03.502234 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-18 00:30:03.502245 | orchestrator | Thursday 18 September 2025 00:30:02 +0000 (0:00:01.048) 0:07:12.636 **** 2025-09-18 00:30:03.502256 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:03.502266 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:03.502277 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:03.502288 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:03.502299 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:03.502310 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:03.502321 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:03.502331 | orchestrator | 2025-09-18 00:30:03.502353 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-18 00:30:36.346526 | orchestrator | Thursday 18 September 2025 00:30:03 +0000 (0:00:00.526) 0:07:13.162 **** 2025-09-18 00:30:36.346645 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.346661 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.346672 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.346683 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.346693 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.346704 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.346716 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.346727 | orchestrator | 2025-09-18 00:30:36.346739 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-18 00:30:36.346750 | orchestrator | Thursday 18 September 2025 00:30:04 +0000 (0:00:00.553) 0:07:13.715 **** 2025-09-18 00:30:36.346761 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.346772 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.346783 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.346793 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.346804 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.346814 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.346825 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.346836 | orchestrator | 2025-09-18 00:30:36.346847 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-18 00:30:36.346858 | orchestrator | Thursday 18 September 2025 00:30:04 +0000 (0:00:00.526) 0:07:14.242 **** 2025-09-18 00:30:36.346894 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.346906 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.346916 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.346927 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.346937 
| orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.346948 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.346958 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.346969 | orchestrator | 2025-09-18 00:30:36.346980 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-18 00:30:36.346990 | orchestrator | Thursday 18 September 2025 00:30:05 +0000 (0:00:00.518) 0:07:14.760 **** 2025-09-18 00:30:36.347001 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.347012 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.347031 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.347050 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.347070 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.347088 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.347107 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.347125 | orchestrator | 2025-09-18 00:30:36.347143 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-18 00:30:36.347161 | orchestrator | Thursday 18 September 2025 00:30:10 +0000 (0:00:05.837) 0:07:20.598 **** 2025-09-18 00:30:36.347199 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:36.347220 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:36.347239 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:36.347258 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:36.347276 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:36.347297 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:36.347318 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:36.347338 | orchestrator | 2025-09-18 00:30:36.347354 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-18 00:30:36.347366 | orchestrator | Thursday 18 September 2025 00:30:11 +0000 (0:00:00.479) 0:07:21.077 **** 2025-09-18 00:30:36.347381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:30:36.347394 | orchestrator | 2025-09-18 00:30:36.347405 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-18 00:30:36.347440 | orchestrator | Thursday 18 September 2025 00:30:12 +0000 (0:00:00.712) 0:07:21.789 **** 2025-09-18 00:30:36.347452 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.347462 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.347473 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.347484 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.347495 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.347505 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.347516 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.347526 | orchestrator | 2025-09-18 00:30:36.347537 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-18 00:30:36.347548 | orchestrator | Thursday 18 September 2025 00:30:14 +0000 (0:00:02.012) 0:07:23.802 **** 2025-09-18 00:30:36.347559 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.347569 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.347580 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.347590 | orchestrator | ok: [testbed-node-5] 
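The chrony role follows the usual install-then-manage pattern: the Debian-family include installs the package, the service is enabled and started, and the configuration file is templated in right after. A minimal sketch of that pattern, assuming the Debian-family package and service are both named chrony:

# Sketch of the package + service pattern used by the chrony (and lldpd) roles.
- name: Install package
  ansible.builtin.apt:
    name: chrony
    state: present

- name: Manage chrony service
  ansible.builtin.service:
    name: chrony
    state: started
    enabled: true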
2025-09-18 00:30:36.347601 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.347611 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.347622 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.347632 | orchestrator | 2025-09-18 00:30:36.347643 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-18 00:30:36.347653 | orchestrator | Thursday 18 September 2025 00:30:15 +0000 (0:00:01.109) 0:07:24.912 **** 2025-09-18 00:30:36.347664 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.347675 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.347696 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.347707 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.347718 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.347728 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.347739 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.347749 | orchestrator | 2025-09-18 00:30:36.347760 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-18 00:30:36.347771 | orchestrator | Thursday 18 September 2025 00:30:16 +0000 (0:00:00.894) 0:07:25.807 **** 2025-09-18 00:30:36.347782 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347794 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347805 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347835 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347846 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347857 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347868 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 00:30:36.347879 | orchestrator | 2025-09-18 00:30:36.347890 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-18 00:30:36.347901 | orchestrator | Thursday 18 September 2025 00:30:17 +0000 (0:00:01.715) 0:07:27.522 **** 2025-09-18 00:30:36.347912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:30:36.347924 | orchestrator | 2025-09-18 00:30:36.347935 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-18 00:30:36.347945 | orchestrator | Thursday 18 September 2025 00:30:18 +0000 (0:00:00.988) 0:07:28.511 **** 2025-09-18 00:30:36.347956 | orchestrator | changed: [testbed-manager] 2025-09-18 00:30:36.347967 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:36.347978 | orchestrator | 
changed: [testbed-node-1] 2025-09-18 00:30:36.347988 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:36.347999 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:36.348010 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:36.348020 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:36.348031 | orchestrator | 2025-09-18 00:30:36.348042 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-18 00:30:36.348053 | orchestrator | Thursday 18 September 2025 00:30:28 +0000 (0:00:09.226) 0:07:37.737 **** 2025-09-18 00:30:36.348063 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.348080 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.348091 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.348102 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.348112 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.348123 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.348133 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.348144 | orchestrator | 2025-09-18 00:30:36.348155 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-18 00:30:36.348165 | orchestrator | Thursday 18 September 2025 00:30:30 +0000 (0:00:02.023) 0:07:39.761 **** 2025-09-18 00:30:36.348176 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.348187 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.348204 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.348214 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.348225 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.348235 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.348245 | orchestrator | 2025-09-18 00:30:36.348256 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-18 00:30:36.348267 | orchestrator | Thursday 18 September 2025 00:30:31 +0000 (0:00:01.301) 0:07:41.062 **** 2025-09-18 00:30:36.348277 | orchestrator | changed: [testbed-manager] 2025-09-18 00:30:36.348288 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:36.348299 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:36.348309 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:36.348320 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:36.348330 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:36.348341 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:36.348351 | orchestrator | 2025-09-18 00:30:36.348362 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-18 00:30:36.348373 | orchestrator | 2025-09-18 00:30:36.348383 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-18 00:30:36.348394 | orchestrator | Thursday 18 September 2025 00:30:32 +0000 (0:00:01.313) 0:07:42.375 **** 2025-09-18 00:30:36.348405 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:30:36.348431 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:30:36.348442 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:30:36.348453 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:30:36.348463 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:30:36.348474 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:30:36.348484 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:30:36.348495 | orchestrator | 2025-09-18 00:30:36.348505 | orchestrator | PLAY [Apply 
bootstrap roles part 3] ******************************************** 2025-09-18 00:30:36.348516 | orchestrator | 2025-09-18 00:30:36.348526 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-18 00:30:36.348537 | orchestrator | Thursday 18 September 2025 00:30:33 +0000 (0:00:00.517) 0:07:42.893 **** 2025-09-18 00:30:36.348547 | orchestrator | changed: [testbed-manager] 2025-09-18 00:30:36.348558 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:30:36.348568 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:30:36.348579 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:30:36.348589 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:30:36.348600 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:30:36.348610 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:30:36.348621 | orchestrator | 2025-09-18 00:30:36.348631 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-18 00:30:36.348642 | orchestrator | Thursday 18 September 2025 00:30:34 +0000 (0:00:01.341) 0:07:44.235 **** 2025-09-18 00:30:36.348653 | orchestrator | ok: [testbed-manager] 2025-09-18 00:30:36.348663 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:30:36.348674 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:30:36.348684 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:30:36.348695 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:30:36.348705 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:30:36.348716 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:30:36.348726 | orchestrator | 2025-09-18 00:30:36.348737 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-18 00:30:36.348755 | orchestrator | Thursday 18 September 2025 00:30:36 +0000 (0:00:01.772) 0:07:46.008 **** 2025-09-18 00:31:00.143802 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:31:00.143930 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:31:00.143947 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:31:00.143959 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:31:00.143971 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:31:00.143982 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:31:00.143993 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:31:00.144005 | orchestrator | 2025-09-18 00:31:00.144043 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-18 00:31:00.144055 | orchestrator | Thursday 18 September 2025 00:30:36 +0000 (0:00:00.503) 0:07:46.511 **** 2025-09-18 00:31:00.144066 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:31:00.144079 | orchestrator | 2025-09-18 00:31:00.144090 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-18 00:31:00.144100 | orchestrator | Thursday 18 September 2025 00:30:37 +0000 (0:00:01.053) 0:07:47.565 **** 2025-09-18 00:31:00.144114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:31:00.144128 | orchestrator | 2025-09-18 00:31:00.144139 | orchestrator | TASK [osism.services.smartd : Install 
smartmontools package] ******************* 2025-09-18 00:31:00.144150 | orchestrator | Thursday 18 September 2025 00:30:38 +0000 (0:00:00.835) 0:07:48.401 **** 2025-09-18 00:31:00.144160 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.144171 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.144182 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.144193 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.144203 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.144214 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.144225 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.144235 | orchestrator | 2025-09-18 00:31:00.144246 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-18 00:31:00.144257 | orchestrator | Thursday 18 September 2025 00:30:47 +0000 (0:00:08.287) 0:07:56.689 **** 2025-09-18 00:31:00.144268 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.144279 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.144289 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.144300 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.144311 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.144321 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.144333 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.144347 | orchestrator | 2025-09-18 00:31:00.144359 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-18 00:31:00.144372 | orchestrator | Thursday 18 September 2025 00:30:47 +0000 (0:00:00.827) 0:07:57.516 **** 2025-09-18 00:31:00.144385 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.144397 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.144437 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.144450 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.144462 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.144475 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.144487 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.144499 | orchestrator | 2025-09-18 00:31:00.144512 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-18 00:31:00.144524 | orchestrator | Thursday 18 September 2025 00:30:49 +0000 (0:00:01.596) 0:07:59.113 **** 2025-09-18 00:31:00.144537 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.144550 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.144562 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.144574 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.144586 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.144599 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.144612 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.144625 | orchestrator | 2025-09-18 00:31:00.144637 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-18 00:31:00.144650 | orchestrator | Thursday 18 September 2025 00:30:51 +0000 (0:00:01.764) 0:08:00.877 **** 2025-09-18 00:31:00.144663 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.144683 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.144695 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.144706 | orchestrator | changed: [testbed-node-5] 
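Both the journald and smartd roles copy a configuration file and restart the corresponding service only through a handler, so the restart runs once at the end of the play and only if the file actually changed (see the RUNNING HANDLER entries around this point). A hedged sketch of that notify/handler pairing, with an illustrative template name:

# Sketch only: template a config file and restart the service on change.
- hosts: all
  tasks:
    - name: Copy configuration file
      ansible.builtin.template:
        src: journald.conf.j2            # illustrative template name
        dest: /etc/systemd/journald.conf
        mode: "0644"
      notify: Restart journald service

  handlers:
    - name: Restart journald service
      ansible.builtin.systemd:
        name: systemd-journald
        state: restarted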
2025-09-18 00:31:00.144717 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.144727 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.144737 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.144748 | orchestrator | 2025-09-18 00:31:00.144759 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-18 00:31:00.144770 | orchestrator | Thursday 18 September 2025 00:30:52 +0000 (0:00:01.268) 0:08:02.146 **** 2025-09-18 00:31:00.144781 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.144791 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.144802 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.144812 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.144823 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.144833 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.144844 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.144855 | orchestrator | 2025-09-18 00:31:00.144865 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-18 00:31:00.144876 | orchestrator | 2025-09-18 00:31:00.144887 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-18 00:31:00.144944 | orchestrator | Thursday 18 September 2025 00:30:53 +0000 (0:00:01.399) 0:08:03.546 **** 2025-09-18 00:31:00.144956 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:31:00.144967 | orchestrator | 2025-09-18 00:31:00.144978 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-18 00:31:00.145006 | orchestrator | Thursday 18 September 2025 00:30:54 +0000 (0:00:00.836) 0:08:04.382 **** 2025-09-18 00:31:00.145017 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:00.145030 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:00.145041 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:00.145052 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:00.145063 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:00.145074 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:00.145085 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:00.145095 | orchestrator | 2025-09-18 00:31:00.145106 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-18 00:31:00.145118 | orchestrator | Thursday 18 September 2025 00:30:55 +0000 (0:00:00.828) 0:08:05.211 **** 2025-09-18 00:31:00.145128 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.145140 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.145150 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.145161 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.145172 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.145183 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.145194 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.145205 | orchestrator | 2025-09-18 00:31:00.145215 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-18 00:31:00.145226 | orchestrator | Thursday 18 September 2025 00:30:56 +0000 (0:00:01.325) 0:08:06.536 **** 2025-09-18 00:31:00.145237 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:31:00.145248 | orchestrator | 2025-09-18 00:31:00.145259 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-18 00:31:00.145270 | orchestrator | Thursday 18 September 2025 00:30:57 +0000 (0:00:00.866) 0:08:07.403 **** 2025-09-18 00:31:00.145281 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:00.145292 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:00.145303 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:00.145314 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:00.145325 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:00.145342 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:00.145353 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:00.145364 | orchestrator | 2025-09-18 00:31:00.145375 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-18 00:31:00.145386 | orchestrator | Thursday 18 September 2025 00:30:58 +0000 (0:00:00.887) 0:08:08.291 **** 2025-09-18 00:31:00.145397 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:00.145446 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:00.145458 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:00.145469 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:00.145480 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:00.145490 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:00.145501 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:00.145512 | orchestrator | 2025-09-18 00:31:00.145523 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:31:00.145535 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-18 00:31:00.145547 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-18 00:31:00.145558 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-18 00:31:00.145569 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-18 00:31:00.145580 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-18 00:31:00.145591 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-18 00:31:00.145602 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-18 00:31:00.145613 | orchestrator | 2025-09-18 00:31:00.145624 | orchestrator | 2025-09-18 00:31:00.145634 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:31:00.145645 | orchestrator | Thursday 18 September 2025 00:31:00 +0000 (0:00:01.489) 0:08:09.781 **** 2025-09-18 00:31:00.145657 | orchestrator | =============================================================================== 2025-09-18 00:31:00.145667 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.92s 2025-09-18 00:31:00.145679 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.98s 2025-09-18 00:31:00.145689 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.79s 
2025-09-18 00:31:00.145700 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.62s 2025-09-18 00:31:00.145711 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.72s 2025-09-18 00:31:00.145722 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.40s 2025-09-18 00:31:00.145733 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.06s 2025-09-18 00:31:00.145744 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.97s 2025-09-18 00:31:00.145754 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.36s 2025-09-18 00:31:00.145765 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.23s 2025-09-18 00:31:00.145783 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.97s 2025-09-18 00:31:00.635934 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.29s 2025-09-18 00:31:00.636032 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.00s 2025-09-18 00:31:00.636074 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.91s 2025-09-18 00:31:00.636087 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.76s 2025-09-18 00:31:00.636097 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.63s 2025-09-18 00:31:00.636108 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.47s 2025-09-18 00:31:00.636119 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.11s 2025-09-18 00:31:00.636130 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.99s 2025-09-18 00:31:00.636141 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.84s 2025-09-18 00:31:00.969173 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-18 00:31:00.969241 | orchestrator | + osism apply network 2025-09-18 00:31:13.552380 | orchestrator | 2025-09-18 00:31:13 | INFO  | Task 5dbf8f24-670b-4811-9fd6-2fab6a1a2814 (network) was prepared for execution. 2025-09-18 00:31:13.552584 | orchestrator | 2025-09-18 00:31:13 | INFO  | It takes a moment until task 5dbf8f24-670b-4811-9fd6-2fab6a1a2814 (network) has been started and output is visible here. 
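The "Create custom facts directory" and "Write state into file" tasks above persist the bootstrap state as Ansible local facts so that later runs can tell whether a host has already been bootstrapped. A hedged sketch of that standard custom-facts mechanism; the directory is Ansible's default facts.d location, but the file and key names here are purely illustrative, not the ones osism.commons.state actually writes:

# Sketch only: persist a state marker as a local fact (names are hypothetical).
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Write state into file
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/osism_bootstrap.fact   # hypothetical file name
    content: |
      [state]
      status = bootstrapped
    mode: "0644"

After the next fact gathering this would surface as ansible_local['osism_bootstrap']['state']['status'], under whatever names the role really uses.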
2025-09-18 00:31:42.571781 | orchestrator | 2025-09-18 00:31:42.571899 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-18 00:31:42.571916 | orchestrator | 2025-09-18 00:31:42.571928 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-18 00:31:42.571940 | orchestrator | Thursday 18 September 2025 00:31:17 +0000 (0:00:00.293) 0:00:00.293 **** 2025-09-18 00:31:42.571951 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.571963 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.571974 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.571985 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.571997 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.572007 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.572018 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.572029 | orchestrator | 2025-09-18 00:31:42.572039 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-18 00:31:42.572050 | orchestrator | Thursday 18 September 2025 00:31:18 +0000 (0:00:00.757) 0:00:01.051 **** 2025-09-18 00:31:42.572063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:31:42.572076 | orchestrator | 2025-09-18 00:31:42.572088 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-18 00:31:42.572098 | orchestrator | Thursday 18 September 2025 00:31:19 +0000 (0:00:01.230) 0:00:02.282 **** 2025-09-18 00:31:42.572109 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.572120 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.572130 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.572141 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.572151 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.572162 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.572173 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.572183 | orchestrator | 2025-09-18 00:31:42.572194 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-18 00:31:42.572205 | orchestrator | Thursday 18 September 2025 00:31:21 +0000 (0:00:01.741) 0:00:04.024 **** 2025-09-18 00:31:42.572215 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.572226 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.572236 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.572247 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.572257 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.572268 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.572279 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.572289 | orchestrator | 2025-09-18 00:31:42.572302 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-18 00:31:42.572339 | orchestrator | Thursday 18 September 2025 00:31:23 +0000 (0:00:01.835) 0:00:05.859 **** 2025-09-18 00:31:42.572353 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-18 00:31:42.572366 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-18 00:31:42.572377 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-18 00:31:42.572413 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-18 00:31:42.572424 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-18 00:31:42.572435 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-18 00:31:42.572445 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-18 00:31:42.572456 | orchestrator | 2025-09-18 00:31:42.572467 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-18 00:31:42.572477 | orchestrator | Thursday 18 September 2025 00:31:24 +0000 (0:00:01.066) 0:00:06.925 **** 2025-09-18 00:31:42.572488 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 00:31:42.572500 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:31:42.572510 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-18 00:31:42.572521 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-18 00:31:42.572531 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-18 00:31:42.572542 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-18 00:31:42.572552 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-18 00:31:42.572563 | orchestrator | 2025-09-18 00:31:42.572573 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-18 00:31:42.572584 | orchestrator | Thursday 18 September 2025 00:31:27 +0000 (0:00:03.387) 0:00:10.313 **** 2025-09-18 00:31:42.572595 | orchestrator | changed: [testbed-manager] 2025-09-18 00:31:42.572606 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:42.572616 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:42.572627 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:42.572637 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:42.572648 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:42.572658 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:42.572669 | orchestrator | 2025-09-18 00:31:42.572679 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-18 00:31:42.572690 | orchestrator | Thursday 18 September 2025 00:31:29 +0000 (0:00:01.477) 0:00:11.790 **** 2025-09-18 00:31:42.572700 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:31:42.572711 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 00:31:42.572721 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-18 00:31:42.572732 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-18 00:31:42.572742 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-18 00:31:42.572753 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-18 00:31:42.572763 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-18 00:31:42.572774 | orchestrator | 2025-09-18 00:31:42.572785 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-18 00:31:42.572795 | orchestrator | Thursday 18 September 2025 00:31:31 +0000 (0:00:01.991) 0:00:13.782 **** 2025-09-18 00:31:42.572806 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.572816 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.572827 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.572837 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.572848 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.572858 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.572869 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.572879 | orchestrator | 2025-09-18 
00:31:42.572890 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-18 00:31:42.572917 | orchestrator | Thursday 18 September 2025 00:31:32 +0000 (0:00:01.196) 0:00:14.978 **** 2025-09-18 00:31:42.572929 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:31:42.572939 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:31:42.572950 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:31:42.572969 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:31:42.572980 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:31:42.572991 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:31:42.573002 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:31:42.573012 | orchestrator | 2025-09-18 00:31:42.573023 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-18 00:31:42.573048 | orchestrator | Thursday 18 September 2025 00:31:33 +0000 (0:00:00.685) 0:00:15.664 **** 2025-09-18 00:31:42.573060 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.573086 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.573097 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.573107 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.573128 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.573139 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.573150 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.573160 | orchestrator | 2025-09-18 00:31:42.573171 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-18 00:31:42.573182 | orchestrator | Thursday 18 September 2025 00:31:35 +0000 (0:00:02.205) 0:00:17.869 **** 2025-09-18 00:31:42.573192 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:31:42.573203 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:31:42.573214 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:31:42.573224 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:31:42.573235 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:31:42.573245 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:31:42.573257 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-18 00:31:42.573269 | orchestrator | 2025-09-18 00:31:42.573280 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-18 00:31:42.573291 | orchestrator | Thursday 18 September 2025 00:31:36 +0000 (0:00:00.888) 0:00:18.758 **** 2025-09-18 00:31:42.573301 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.573312 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:31:42.573322 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:31:42.573333 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:31:42.573343 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:31:42.573354 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:31:42.573365 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:31:42.573375 | orchestrator | 2025-09-18 00:31:42.573418 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-18 00:31:42.573430 | orchestrator | Thursday 18 September 2025 00:31:38 +0000 (0:00:01.720) 0:00:20.478 **** 2025-09-18 00:31:42.573441 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:31:42.573454 | orchestrator | 2025-09-18 00:31:42.573464 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-18 00:31:42.573475 | orchestrator | Thursday 18 September 2025 00:31:39 +0000 (0:00:01.301) 0:00:21.779 **** 2025-09-18 00:31:42.573486 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.573496 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.573507 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.573517 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.573528 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.573538 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.573549 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.573559 | orchestrator | 2025-09-18 00:31:42.573570 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-18 00:31:42.573581 | orchestrator | Thursday 18 September 2025 00:31:40 +0000 (0:00:00.987) 0:00:22.766 **** 2025-09-18 00:31:42.573591 | orchestrator | ok: [testbed-manager] 2025-09-18 00:31:42.573602 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:31:42.573613 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:31:42.573630 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:31:42.573641 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:31:42.573651 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:31:42.573662 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:31:42.573672 | orchestrator | 2025-09-18 00:31:42.573683 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-18 00:31:42.573694 | orchestrator | Thursday 18 September 2025 00:31:41 +0000 (0:00:00.928) 0:00:23.695 **** 2025-09-18 00:31:42.573704 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573715 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573725 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573736 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573747 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 00:31:42.573757 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573767 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 00:31:42.573778 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573789 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 00:31:42.573799 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-18 00:31:42.573810 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 00:31:42.573820 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 00:31:42.573831 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 00:31:42.573842 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-18 
00:31:42.573852 | orchestrator | 2025-09-18 00:31:42.573870 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-18 00:31:57.839648 | orchestrator | Thursday 18 September 2025 00:31:42 +0000 (0:00:01.259) 0:00:24.955 **** 2025-09-18 00:31:57.839748 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:31:57.839765 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:31:57.839777 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:31:57.839788 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:31:57.839799 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:31:57.839809 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:31:57.839821 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:31:57.839832 | orchestrator | 2025-09-18 00:31:57.839857 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-18 00:31:57.839869 | orchestrator | Thursday 18 September 2025 00:31:43 +0000 (0:00:00.643) 0:00:25.598 **** 2025-09-18 00:31:57.839882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-5, testbed-node-4 2025-09-18 00:31:57.839895 | orchestrator | 2025-09-18 00:31:57.839906 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-18 00:31:57.839917 | orchestrator | Thursday 18 September 2025 00:31:47 +0000 (0:00:04.597) 0:00:30.196 **** 2025-09-18 00:31:57.839930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.839941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.839953 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.839992 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840138 | orchestrator | 2025-09-18 00:31:57.840149 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-18 00:31:57.840161 | orchestrator | Thursday 18 September 2025 00:31:52 +0000 (0:00:04.954) 0:00:35.150 **** 2025-09-18 00:31:57.840172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840191 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840215 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840267 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-18 00:31:57.840293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:31:57.840342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:32:04.040341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-18 00:32:04.040519 | orchestrator | 2025-09-18 00:32:04.040535 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-18 00:32:04.040549 | orchestrator | Thursday 18 September 2025 00:31:57 +0000 (0:00:05.070) 0:00:40.221 **** 
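The two tasks above render a .netdev/.network pair per VXLAN (30-vxlan0.* and 30-vxlan1.*, the same file names the cleanup task below checks). A minimal verification sketch to run on any testbed node once systemd-networkd has applied them; the interface names, VNIs 42/23 and MTU 1350 come from the play output, everything else is illustrative:

  # confirm the rendered unit files are in place
  ls -l /etc/systemd/network/30-vxlan0.netdev /etc/systemd/network/30-vxlan0.network
  # show the VXLAN details (vni, local endpoint, mtu) of the overlay interfaces
  ip -d link show vxlan0
  ip -d link show vxlan1
  # report operational state and addresses (e.g. 192.168.128.x/20 on vxlan1)
  networkctl status vxlan1 --no-pager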
2025-09-18 00:32:04.040589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:32:04.040601 | orchestrator | 2025-09-18 00:32:04.040611 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-18 00:32:04.040621 | orchestrator | Thursday 18 September 2025 00:31:58 +0000 (0:00:01.094) 0:00:41.316 **** 2025-09-18 00:32:04.040631 | orchestrator | ok: [testbed-manager] 2025-09-18 00:32:04.040643 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:32:04.040653 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:32:04.040662 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:32:04.040672 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:32:04.040681 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:32:04.040691 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:32:04.040701 | orchestrator | 2025-09-18 00:32:04.040711 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-18 00:32:04.040721 | orchestrator | Thursday 18 September 2025 00:32:00 +0000 (0:00:01.175) 0:00:42.491 **** 2025-09-18 00:32:04.040731 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.040741 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.040751 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.040761 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.040770 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.040799 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.040810 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.040820 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.040829 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:32:04.040840 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.040850 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.040862 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.040874 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.040885 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:32:04.040897 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.040908 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.040919 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.040930 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.040942 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:32:04.040953 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.040965 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.040976 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.040988 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.040999 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:32:04.041011 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.041022 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.041034 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.041052 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.041064 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:32:04.041076 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:32:04.041088 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 00:32:04.041100 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 00:32:04.041111 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 00:32:04.041122 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 00:32:04.041134 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:32:04.041145 | orchestrator | 2025-09-18 00:32:04.041156 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-18 00:32:04.041187 | orchestrator | Thursday 18 September 2025 00:32:02 +0000 (0:00:02.095) 0:00:44.587 **** 2025-09-18 00:32:04.041199 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:32:04.041210 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:32:04.041222 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:32:04.041232 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:32:04.041241 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:32:04.041251 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:32:04.041266 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:32:04.041276 | orchestrator | 2025-09-18 00:32:04.041286 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-18 00:32:04.041295 | orchestrator | Thursday 18 September 2025 00:32:02 +0000 (0:00:00.679) 0:00:45.266 **** 2025-09-18 00:32:04.041305 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:32:04.041315 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:32:04.041324 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:32:04.041334 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:32:04.041344 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:32:04.041353 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:32:04.041363 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:32:04.041395 | orchestrator | 2025-09-18 00:32:04.041405 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:32:04.041417 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 00:32:04.041429 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:32:04.041439 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:32:04.041449 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:32:04.041459 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:32:04.041469 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:32:04.041479 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 00:32:04.041488 | orchestrator | 2025-09-18 00:32:04.041498 | orchestrator | 2025-09-18 00:32:04.041508 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:32:04.041518 | orchestrator | Thursday 18 September 2025 00:32:03 +0000 (0:00:00.766) 0:00:46.032 **** 2025-09-18 00:32:04.041528 | orchestrator | =============================================================================== 2025-09-18 00:32:04.041545 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.07s 2025-09-18 00:32:04.041555 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.95s 2025-09-18 00:32:04.041564 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.60s 2025-09-18 00:32:04.041574 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.39s 2025-09-18 00:32:04.041583 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s 2025-09-18 00:32:04.041593 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.10s 2025-09-18 00:32:04.041603 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.99s 2025-09-18 00:32:04.041613 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.84s 2025-09-18 00:32:04.041622 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.74s 2025-09-18 00:32:04.041632 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2025-09-18 00:32:04.041641 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2025-09-18 00:32:04.041651 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2025-09-18 00:32:04.041661 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2025-09-18 00:32:04.041670 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.23s 2025-09-18 00:32:04.041680 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.20s 2025-09-18 00:32:04.041690 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s 2025-09-18 00:32:04.041699 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2025-09-18 00:32:04.041709 | orchestrator | osism.commons.network : Create required directories --------------------- 1.07s 2025-09-18 00:32:04.041719 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 0.99s 2025-09-18 00:32:04.041728 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.93s 2025-09-18 00:32:04.369822 | orchestrator | + osism apply wireguard 2025-09-18 00:32:16.433727 | orchestrator | 2025-09-18 00:32:16 | INFO  | Task a74020cc-6609-480c-9f09-c1a7083ad3eb (wireguard) was prepared for execution. 2025-09-18 00:32:16.433849 | orchestrator | 2025-09-18 00:32:16 | INFO  | It takes a moment until task a74020cc-6609-480c-9f09-c1a7083ad3eb (wireguard) has been started and output is visible here. 2025-09-18 00:32:35.431528 | orchestrator | 2025-09-18 00:32:35.431684 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-18 00:32:35.431701 | orchestrator | 2025-09-18 00:32:35.431713 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-18 00:32:35.431751 | orchestrator | Thursday 18 September 2025 00:32:20 +0000 (0:00:00.170) 0:00:00.170 **** 2025-09-18 00:32:35.431764 | orchestrator | ok: [testbed-manager] 2025-09-18 00:32:35.431776 | orchestrator | 2025-09-18 00:32:35.431787 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-18 00:32:35.431798 | orchestrator | Thursday 18 September 2025 00:32:21 +0000 (0:00:01.399) 0:00:01.570 **** 2025-09-18 00:32:35.431810 | orchestrator | changed: [testbed-manager] 2025-09-18 00:32:35.431822 | orchestrator | 2025-09-18 00:32:35.431832 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-18 00:32:35.431843 | orchestrator | Thursday 18 September 2025 00:32:27 +0000 (0:00:06.246) 0:00:07.816 **** 2025-09-18 00:32:35.431854 | orchestrator | changed: [testbed-manager] 2025-09-18 00:32:35.431865 | orchestrator | 2025-09-18 00:32:35.431876 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-18 00:32:35.431887 | orchestrator | Thursday 18 September 2025 00:32:28 +0000 (0:00:00.579) 0:00:08.395 **** 2025-09-18 00:32:35.431898 | orchestrator | changed: [testbed-manager] 2025-09-18 00:32:35.431939 | orchestrator | 2025-09-18 00:32:35.431950 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-18 00:32:35.431962 | orchestrator | Thursday 18 September 2025 00:32:28 +0000 (0:00:00.430) 0:00:08.826 **** 2025-09-18 00:32:35.431973 | orchestrator | ok: [testbed-manager] 2025-09-18 00:32:35.431985 | orchestrator | 2025-09-18 00:32:35.431998 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-18 00:32:35.432011 | orchestrator | Thursday 18 September 2025 00:32:29 +0000 (0:00:00.532) 0:00:09.359 **** 2025-09-18 00:32:35.432023 | orchestrator | ok: [testbed-manager] 2025-09-18 00:32:35.432035 | orchestrator | 2025-09-18 00:32:35.432046 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-18 00:32:35.432059 | orchestrator | Thursday 18 September 2025 00:32:29 +0000 (0:00:00.558) 0:00:09.917 **** 2025-09-18 00:32:35.432071 | orchestrator | ok: [testbed-manager] 2025-09-18 00:32:35.432084 | orchestrator | 2025-09-18 00:32:35.432096 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-18 00:32:35.432109 | orchestrator | Thursday 18 September 2025 00:32:30 +0000 (0:00:00.417) 0:00:10.334 **** 2025-09-18 00:32:35.432121 | orchestrator | 
changed: [testbed-manager] 2025-09-18 00:32:35.432134 | orchestrator | 2025-09-18 00:32:35.432146 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-18 00:32:35.432157 | orchestrator | Thursday 18 September 2025 00:32:31 +0000 (0:00:01.190) 0:00:11.525 **** 2025-09-18 00:32:35.432170 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 00:32:35.432183 | orchestrator | changed: [testbed-manager] 2025-09-18 00:32:35.432196 | orchestrator | 2025-09-18 00:32:35.432209 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-18 00:32:35.432221 | orchestrator | Thursday 18 September 2025 00:32:32 +0000 (0:00:00.939) 0:00:12.465 **** 2025-09-18 00:32:35.432233 | orchestrator | changed: [testbed-manager] 2025-09-18 00:32:35.432246 | orchestrator | 2025-09-18 00:32:35.432258 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-18 00:32:35.432271 | orchestrator | Thursday 18 September 2025 00:32:34 +0000 (0:00:01.688) 0:00:14.154 **** 2025-09-18 00:32:35.432283 | orchestrator | changed: [testbed-manager] 2025-09-18 00:32:35.432295 | orchestrator | 2025-09-18 00:32:35.432307 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:32:35.432321 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:32:35.432335 | orchestrator | 2025-09-18 00:32:35.432347 | orchestrator | 2025-09-18 00:32:35.432381 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:32:35.432392 | orchestrator | Thursday 18 September 2025 00:32:35 +0000 (0:00:00.968) 0:00:15.122 **** 2025-09-18 00:32:35.432403 | orchestrator | =============================================================================== 2025-09-18 00:32:35.432414 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.25s 2025-09-18 00:32:35.432425 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-09-18 00:32:35.432436 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.40s 2025-09-18 00:32:35.432447 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-09-18 00:32:35.432457 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2025-09-18 00:32:35.432468 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2025-09-18 00:32:35.432479 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-09-18 00:32:35.432490 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s 2025-09-18 00:32:35.432501 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-09-18 00:32:35.432511 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-09-18 00:32:35.432532 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-18 00:32:35.740843 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-18 00:32:35.770826 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-18 00:32:35.770948 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-09-18 00:32:35.854292 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 168 0 --:--:-- --:--:-- --:--:-- 168 2025-09-18 00:32:35.868317 | orchestrator | + osism apply --environment custom workarounds 2025-09-18 00:32:37.806292 | orchestrator | 2025-09-18 00:32:37 | INFO  | Trying to run play workarounds in environment custom 2025-09-18 00:32:47.970558 | orchestrator | 2025-09-18 00:32:47 | INFO  | Task 864f77f9-a2cd-4012-9a6e-cc1dd21e222d (workarounds) was prepared for execution. 2025-09-18 00:32:47.970673 | orchestrator | 2025-09-18 00:32:47 | INFO  | It takes a moment until task 864f77f9-a2cd-4012-9a6e-cc1dd21e222d (workarounds) has been started and output is visible here. 2025-09-18 00:33:12.901627 | orchestrator | 2025-09-18 00:33:12.901744 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:33:12.901762 | orchestrator | 2025-09-18 00:33:12.901773 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-18 00:33:12.901785 | orchestrator | Thursday 18 September 2025 00:32:51 +0000 (0:00:00.136) 0:00:00.136 **** 2025-09-18 00:33:12.901796 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901807 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901818 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901829 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901840 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901850 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901861 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-18 00:33:12.901871 | orchestrator | 2025-09-18 00:33:12.901882 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-18 00:33:12.901893 | orchestrator | 2025-09-18 00:33:12.901903 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-18 00:33:12.901914 | orchestrator | Thursday 18 September 2025 00:32:52 +0000 (0:00:00.676) 0:00:00.813 **** 2025-09-18 00:33:12.901925 | orchestrator | ok: [testbed-manager] 2025-09-18 00:33:12.901937 | orchestrator | 2025-09-18 00:33:12.901948 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-18 00:33:12.901959 | orchestrator | 2025-09-18 00:33:12.901969 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-18 00:33:12.901980 | orchestrator | Thursday 18 September 2025 00:32:54 +0000 (0:00:02.142) 0:00:02.955 **** 2025-09-18 00:33:12.901991 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:33:12.902002 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:33:12.902013 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:33:12.902070 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:33:12.902082 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:33:12.902092 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:33:12.902103 | orchestrator | 2025-09-18 00:33:12.902115 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-18 00:33:12.902126 | orchestrator | 
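The play that follows copies the testbed CA onto the non-manager nodes and refreshes the system trust store. A rough manual equivalent on one Debian/Ubuntu node (sketch: the source path is taken from the task items below, the /usr/local/share/ca-certificates target is an assumption based on how update-ca-certificates discovers local CAs):

  # install the testbed CA as a local trust anchor and rebuild the bundle
  sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt /usr/local/share/ca-certificates/testbed.crt
  sudo update-ca-certificates
  # a symlink for the new CA should now exist in the system certificate directory
  ls -l /etc/ssl/certs/testbed.pem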
2025-09-18 00:33:12.902136 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-18 00:33:12.902147 | orchestrator | Thursday 18 September 2025 00:32:56 +0000 (0:00:01.802) 0:00:04.757 **** 2025-09-18 00:33:12.902159 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 00:33:12.902171 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 00:33:12.902208 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 00:33:12.902220 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 00:33:12.902232 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 00:33:12.902245 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 00:33:12.902257 | orchestrator | 2025-09-18 00:33:12.902269 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-18 00:33:12.902281 | orchestrator | Thursday 18 September 2025 00:32:58 +0000 (0:00:01.553) 0:00:06.311 **** 2025-09-18 00:33:12.902294 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:33:12.902307 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:33:12.902319 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:33:12.902331 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:33:12.902364 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:33:12.902376 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:33:12.902389 | orchestrator | 2025-09-18 00:33:12.902401 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-18 00:33:12.902414 | orchestrator | Thursday 18 September 2025 00:33:02 +0000 (0:00:03.990) 0:00:10.301 **** 2025-09-18 00:33:12.902426 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:33:12.902438 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:33:12.902450 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:33:12.902463 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:33:12.902475 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:33:12.902487 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:33:12.902499 | orchestrator | 2025-09-18 00:33:12.902512 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-18 00:33:12.902525 | orchestrator | 2025-09-18 00:33:12.902536 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-18 00:33:12.902547 | orchestrator | Thursday 18 September 2025 00:33:02 +0000 (0:00:00.768) 0:00:11.069 **** 2025-09-18 00:33:12.902558 | orchestrator | changed: [testbed-manager] 2025-09-18 00:33:12.902568 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:33:12.902579 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:33:12.902589 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:33:12.902600 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:33:12.902610 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:33:12.902621 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:33:12.902631 | orchestrator | 2025-09-18 00:33:12.902642 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-18 00:33:12.902653 | orchestrator | Thursday 18 September 2025 00:33:04 +0000 (0:00:01.697) 0:00:12.767 **** 2025-09-18 00:33:12.902676 | orchestrator | changed: [testbed-manager] 2025-09-18 00:33:12.902688 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:33:12.902698 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:33:12.902709 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:33:12.902720 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:33:12.902730 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:33:12.902758 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:33:12.902770 | orchestrator | 2025-09-18 00:33:12.902781 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-18 00:33:12.902792 | orchestrator | Thursday 18 September 2025 00:33:06 +0000 (0:00:01.622) 0:00:14.390 **** 2025-09-18 00:33:12.902803 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:33:12.902814 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:33:12.902824 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:33:12.902850 | orchestrator | ok: [testbed-manager] 2025-09-18 00:33:12.902861 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:33:12.902891 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:33:12.902902 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:33:12.902912 | orchestrator | 2025-09-18 00:33:12.902923 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-18 00:33:12.902934 | orchestrator | Thursday 18 September 2025 00:33:07 +0000 (0:00:01.511) 0:00:15.901 **** 2025-09-18 00:33:12.902945 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:33:12.902955 | orchestrator | changed: [testbed-manager] 2025-09-18 00:33:12.902966 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:33:12.902977 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:33:12.902988 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:33:12.902998 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:33:12.903009 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:33:12.903020 | orchestrator | 2025-09-18 00:33:12.903030 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-18 00:33:12.903041 | orchestrator | Thursday 18 September 2025 00:33:09 +0000 (0:00:01.793) 0:00:17.695 **** 2025-09-18 00:33:12.903052 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:33:12.903063 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:33:12.903073 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:33:12.903084 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:33:12.903095 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:33:12.903105 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:33:12.903116 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:33:12.903127 | orchestrator | 2025-09-18 00:33:12.903138 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-18 00:33:12.903149 | orchestrator | 2025-09-18 00:33:12.903160 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-18 00:33:12.903171 | orchestrator | Thursday 18 September 2025 00:33:10 +0000 (0:00:00.621) 0:00:18.316 **** 2025-09-18 00:33:12.903182 | orchestrator | ok: [testbed-manager] 2025-09-18 00:33:12.903192 
| orchestrator | ok: [testbed-node-1] 2025-09-18 00:33:12.903203 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:33:12.903214 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:33:12.903225 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:33:12.903235 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:33:12.903246 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:33:12.903257 | orchestrator | 2025-09-18 00:33:12.903267 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:33:12.903280 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:33:12.903292 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:12.903303 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:12.903314 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:12.903325 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:12.903351 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:12.903362 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:12.903373 | orchestrator | 2025-09-18 00:33:12.903384 | orchestrator | 2025-09-18 00:33:12.903395 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:33:12.903406 | orchestrator | Thursday 18 September 2025 00:33:12 +0000 (0:00:02.694) 0:00:21.011 **** 2025-09-18 00:33:12.903424 | orchestrator | =============================================================================== 2025-09-18 00:33:12.903435 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.99s 2025-09-18 00:33:12.903446 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s 2025-09-18 00:33:12.903457 | orchestrator | Apply netplan configuration --------------------------------------------- 2.14s 2025-09-18 00:33:12.903468 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-09-18 00:33:12.903478 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2025-09-18 00:33:12.903489 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s 2025-09-18 00:33:12.903499 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s 2025-09-18 00:33:12.903510 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s 2025-09-18 00:33:12.903526 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s 2025-09-18 00:33:12.903537 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 2025-09-18 00:33:12.903548 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.68s 2025-09-18 00:33:12.903565 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2025-09-18 00:33:13.533815 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-18 00:33:25.628636 | orchestrator | 
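The reboot play is limited with -l testbed-nodes so the manager stays up, and -e ireallymeanit=yes satisfies the confirmation guard behind the "Exit playbook, if user did not mean to reboot systems" task shown below. A hedged variation, assuming a single hostname is also accepted as a limit pattern:

  # reboot only one node instead of the whole testbed-nodes group
  osism apply reboot -l testbed-node-0 -e ireallymeanit=yes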
2025-09-18 00:33:25 | INFO  | Task 5cbe12b5-bc04-4290-888d-386b8ec21b98 (reboot) was prepared for execution. 2025-09-18 00:33:25.628748 | orchestrator | 2025-09-18 00:33:25 | INFO  | It takes a moment until task 5cbe12b5-bc04-4290-888d-386b8ec21b98 (reboot) has been started and output is visible here. 2025-09-18 00:33:35.114848 | orchestrator | 2025-09-18 00:33:35.114968 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 00:33:35.114985 | orchestrator | 2025-09-18 00:33:35.114997 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 00:33:35.115009 | orchestrator | Thursday 18 September 2025 00:33:29 +0000 (0:00:00.159) 0:00:00.159 **** 2025-09-18 00:33:35.115020 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:33:35.115033 | orchestrator | 2025-09-18 00:33:35.115044 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 00:33:35.115055 | orchestrator | Thursday 18 September 2025 00:33:29 +0000 (0:00:00.087) 0:00:00.247 **** 2025-09-18 00:33:35.115066 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:33:35.115076 | orchestrator | 2025-09-18 00:33:35.115087 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 00:33:35.115098 | orchestrator | Thursday 18 September 2025 00:33:30 +0000 (0:00:00.877) 0:00:01.124 **** 2025-09-18 00:33:35.115109 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:33:35.115120 | orchestrator | 2025-09-18 00:33:35.115131 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 00:33:35.115142 | orchestrator | 2025-09-18 00:33:35.115153 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 00:33:35.115164 | orchestrator | Thursday 18 September 2025 00:33:30 +0000 (0:00:00.136) 0:00:01.260 **** 2025-09-18 00:33:35.115174 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:33:35.115185 | orchestrator | 2025-09-18 00:33:35.115196 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 00:33:35.115207 | orchestrator | Thursday 18 September 2025 00:33:30 +0000 (0:00:00.092) 0:00:01.353 **** 2025-09-18 00:33:35.115217 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:33:35.115228 | orchestrator | 2025-09-18 00:33:35.115239 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 00:33:35.115250 | orchestrator | Thursday 18 September 2025 00:33:31 +0000 (0:00:00.645) 0:00:01.999 **** 2025-09-18 00:33:35.115261 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:33:35.115271 | orchestrator | 2025-09-18 00:33:35.115306 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 00:33:35.115318 | orchestrator | 2025-09-18 00:33:35.115358 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 00:33:35.115370 | orchestrator | Thursday 18 September 2025 00:33:31 +0000 (0:00:00.123) 0:00:02.122 **** 2025-09-18 00:33:35.115381 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:33:35.115394 | orchestrator | 2025-09-18 00:33:35.115406 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 00:33:35.115419 | orchestrator | Thursday 18 September 2025 
00:33:31 +0000 (0:00:00.161) 0:00:02.284 **** 2025-09-18 00:33:35.115431 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:33:35.115443 | orchestrator | 2025-09-18 00:33:35.115456 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 00:33:35.115468 | orchestrator | Thursday 18 September 2025 00:33:32 +0000 (0:00:00.654) 0:00:02.938 **** 2025-09-18 00:33:35.115480 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:33:35.115492 | orchestrator | 2025-09-18 00:33:35.115505 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 00:33:35.115517 | orchestrator | 2025-09-18 00:33:35.115529 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 00:33:35.115541 | orchestrator | Thursday 18 September 2025 00:33:32 +0000 (0:00:00.118) 0:00:03.057 **** 2025-09-18 00:33:35.115554 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:33:35.115567 | orchestrator | 2025-09-18 00:33:35.115579 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 00:33:35.115591 | orchestrator | Thursday 18 September 2025 00:33:32 +0000 (0:00:00.092) 0:00:03.149 **** 2025-09-18 00:33:35.115604 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:33:35.115615 | orchestrator | 2025-09-18 00:33:35.115628 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 00:33:35.115641 | orchestrator | Thursday 18 September 2025 00:33:32 +0000 (0:00:00.653) 0:00:03.803 **** 2025-09-18 00:33:35.115653 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:33:35.115665 | orchestrator | 2025-09-18 00:33:35.115678 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 00:33:35.115690 | orchestrator | 2025-09-18 00:33:35.115702 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 00:33:35.115714 | orchestrator | Thursday 18 September 2025 00:33:33 +0000 (0:00:00.105) 0:00:03.909 **** 2025-09-18 00:33:35.115727 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:33:35.115740 | orchestrator | 2025-09-18 00:33:35.115750 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 00:33:35.115761 | orchestrator | Thursday 18 September 2025 00:33:33 +0000 (0:00:00.087) 0:00:03.997 **** 2025-09-18 00:33:35.115772 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:33:35.115782 | orchestrator | 2025-09-18 00:33:35.115793 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 00:33:35.115804 | orchestrator | Thursday 18 September 2025 00:33:33 +0000 (0:00:00.691) 0:00:04.688 **** 2025-09-18 00:33:35.115814 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:33:35.115825 | orchestrator | 2025-09-18 00:33:35.115836 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 00:33:35.115847 | orchestrator | 2025-09-18 00:33:35.115857 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 00:33:35.115868 | orchestrator | Thursday 18 September 2025 00:33:33 +0000 (0:00:00.132) 0:00:04.820 **** 2025-09-18 00:33:35.115879 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:33:35.115889 | orchestrator | 2025-09-18 00:33:35.115900 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 00:33:35.115911 | orchestrator | Thursday 18 September 2025 00:33:34 +0000 (0:00:00.104) 0:00:04.924 **** 2025-09-18 00:33:35.115922 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:33:35.115932 | orchestrator | 2025-09-18 00:33:35.115943 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 00:33:35.115962 | orchestrator | Thursday 18 September 2025 00:33:34 +0000 (0:00:00.711) 0:00:05.636 **** 2025-09-18 00:33:35.115990 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:33:35.116001 | orchestrator | 2025-09-18 00:33:35.116012 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:33:35.116024 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:35.116037 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:35.116047 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:35.116058 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:35.116069 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:35.116080 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:33:35.116090 | orchestrator | 2025-09-18 00:33:35.116101 | orchestrator | 2025-09-18 00:33:35.116112 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:33:35.116123 | orchestrator | Thursday 18 September 2025 00:33:34 +0000 (0:00:00.038) 0:00:05.675 **** 2025-09-18 00:33:35.116133 | orchestrator | =============================================================================== 2025-09-18 00:33:35.116144 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.23s 2025-09-18 00:33:35.116159 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2025-09-18 00:33:35.116170 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2025-09-18 00:33:35.431480 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-18 00:33:47.625573 | orchestrator | 2025-09-18 00:33:47 | INFO  | Task ce0ad16d-c8d6-47ff-b384-8459a08648de (wait-for-connection) was prepared for execution. 2025-09-18 00:33:47.625679 | orchestrator | 2025-09-18 00:33:47 | INFO  | It takes a moment until task ce0ad16d-c8d6-47ff-b384-8459a08648de (wait-for-connection) has been started and output is visible here. 
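Because the reboot play deliberately skips its own "wait for the reboot to complete" task, a separate wait-for-connection play is run next to block until the rebooted nodes answer again. A rough ad-hoc equivalent, assuming the play wraps Ansible's wait_for_connection module and is executed against the same inventory:

  # block until every host in testbed-nodes is reachable again (up to 5 minutes per host)
  ansible testbed-nodes -m wait_for_connection -a "timeout=300"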
2025-09-18 00:34:04.142107 | orchestrator | 2025-09-18 00:34:04.142222 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-18 00:34:04.142239 | orchestrator | 2025-09-18 00:34:04.142251 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-18 00:34:04.142263 | orchestrator | Thursday 18 September 2025 00:33:51 +0000 (0:00:00.237) 0:00:00.237 **** 2025-09-18 00:34:04.142274 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:34:04.142286 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:34:04.142298 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:34:04.142309 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:34:04.142369 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:34:04.142389 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:34:04.142409 | orchestrator | 2025-09-18 00:34:04.142428 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:34:04.142446 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:04.142459 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:04.142470 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:04.142508 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:04.142536 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:04.142548 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:04.142558 | orchestrator | 2025-09-18 00:34:04.142569 | orchestrator | 2025-09-18 00:34:04.142580 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:34:04.142604 | orchestrator | Thursday 18 September 2025 00:34:03 +0000 (0:00:11.747) 0:00:11.985 **** 2025-09-18 00:34:04.142615 | orchestrator | =============================================================================== 2025-09-18 00:34:04.142626 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.75s 2025-09-18 00:34:04.477597 | orchestrator | + osism apply hddtemp 2025-09-18 00:34:16.733082 | orchestrator | 2025-09-18 00:34:16 | INFO  | Task 6413ac17-220d-4a44-98d5-1cfce65bcc99 (hddtemp) was prepared for execution. 2025-09-18 00:34:16.733202 | orchestrator | 2025-09-18 00:34:16 | INFO  | It takes a moment until task 6413ac17-220d-4a44-98d5-1cfce65bcc99 (hddtemp) has been started and output is visible here. 
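The hddtemp play starting here removes the retired hddtemp package and switches to the in-kernel drivetemp hwmon driver plus lm-sensors, as the tasks below show. A small after-the-fact check (sketch; module and package names come from the play output, the sensors output format is an assumption):

  # the drivetemp module should be available/loaded after the play
  lsmod | grep drivetemp
  # disk temperatures are then exposed through lm-sensors
  sensors | grep -i -A2 drivetemp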
2025-09-18 00:34:45.130651 | orchestrator | 2025-09-18 00:34:45.130773 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-18 00:34:45.130790 | orchestrator | 2025-09-18 00:34:45.130802 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-18 00:34:45.130814 | orchestrator | Thursday 18 September 2025 00:34:20 +0000 (0:00:00.279) 0:00:00.279 **** 2025-09-18 00:34:45.130825 | orchestrator | ok: [testbed-manager] 2025-09-18 00:34:45.130838 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:34:45.130849 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:34:45.130860 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:34:45.130871 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:34:45.130882 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:34:45.130892 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:34:45.130903 | orchestrator | 2025-09-18 00:34:45.130914 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-18 00:34:45.130925 | orchestrator | Thursday 18 September 2025 00:34:21 +0000 (0:00:00.749) 0:00:01.028 **** 2025-09-18 00:34:45.130938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:34:45.130952 | orchestrator | 2025-09-18 00:34:45.130963 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-18 00:34:45.130974 | orchestrator | Thursday 18 September 2025 00:34:22 +0000 (0:00:01.297) 0:00:02.326 **** 2025-09-18 00:34:45.130984 | orchestrator | ok: [testbed-manager] 2025-09-18 00:34:45.130995 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:34:45.131006 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:34:45.131017 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:34:45.131028 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:34:45.131038 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:34:45.131049 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:34:45.131060 | orchestrator | 2025-09-18 00:34:45.131071 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-18 00:34:45.131082 | orchestrator | Thursday 18 September 2025 00:34:25 +0000 (0:00:02.133) 0:00:04.459 **** 2025-09-18 00:34:45.131093 | orchestrator | changed: [testbed-manager] 2025-09-18 00:34:45.131104 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:34:45.131115 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:34:45.131126 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:34:45.131137 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:34:45.131186 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:34:45.131200 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:34:45.131213 | orchestrator | 2025-09-18 00:34:45.131226 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-18 00:34:45.131238 | orchestrator | Thursday 18 September 2025 00:34:26 +0000 (0:00:01.216) 0:00:05.675 **** 2025-09-18 00:34:45.131251 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:34:45.131263 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:34:45.131275 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:34:45.131287 | orchestrator | ok: [testbed-node-3] 2025-09-18 
00:34:45.131323 | orchestrator | ok: [testbed-manager] 2025-09-18 00:34:45.131335 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:34:45.131347 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:34:45.131360 | orchestrator | 2025-09-18 00:34:45.131372 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-18 00:34:45.131385 | orchestrator | Thursday 18 September 2025 00:34:27 +0000 (0:00:01.173) 0:00:06.848 **** 2025-09-18 00:34:45.131397 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:34:45.131410 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:34:45.131422 | orchestrator | changed: [testbed-manager] 2025-09-18 00:34:45.131435 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:34:45.131447 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:34:45.131459 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:34:45.131472 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:34:45.131484 | orchestrator | 2025-09-18 00:34:45.131497 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-18 00:34:45.131509 | orchestrator | Thursday 18 September 2025 00:34:28 +0000 (0:00:00.881) 0:00:07.730 **** 2025-09-18 00:34:45.131522 | orchestrator | changed: [testbed-manager] 2025-09-18 00:34:45.131534 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:34:45.131547 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:34:45.131557 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:34:45.131568 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:34:45.131579 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:34:45.131589 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:34:45.131600 | orchestrator | 2025-09-18 00:34:45.131611 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-18 00:34:45.131622 | orchestrator | Thursday 18 September 2025 00:34:41 +0000 (0:00:13.071) 0:00:20.802 **** 2025-09-18 00:34:45.131633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:34:45.131644 | orchestrator | 2025-09-18 00:34:45.131655 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-18 00:34:45.131666 | orchestrator | Thursday 18 September 2025 00:34:42 +0000 (0:00:01.419) 0:00:22.221 **** 2025-09-18 00:34:45.131676 | orchestrator | changed: [testbed-manager] 2025-09-18 00:34:45.131702 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:34:45.131713 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:34:45.131723 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:34:45.131734 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:34:45.131745 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:34:45.131755 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:34:45.131766 | orchestrator | 2025-09-18 00:34:45.131777 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:34:45.131788 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:34:45.131818 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:34:45.131831 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:34:45.131850 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:34:45.131861 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:34:45.131872 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:34:45.131883 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:34:45.131894 | orchestrator | 2025-09-18 00:34:45.131905 | orchestrator | 2025-09-18 00:34:45.131915 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:34:45.131926 | orchestrator | Thursday 18 September 2025 00:34:44 +0000 (0:00:01.943) 0:00:24.164 **** 2025-09-18 00:34:45.131937 | orchestrator | =============================================================================== 2025-09-18 00:34:45.131948 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.07s 2025-09-18 00:34:45.131958 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.13s 2025-09-18 00:34:45.131969 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s 2025-09-18 00:34:45.131979 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-09-18 00:34:45.131990 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.30s 2025-09-18 00:34:45.132001 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2025-09-18 00:34:45.132011 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s 2025-09-18 00:34:45.132022 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.88s 2025-09-18 00:34:45.132033 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2025-09-18 00:34:45.407108 | orchestrator | ++ semver latest 7.1.1 2025-09-18 00:34:45.456268 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-18 00:34:45.456412 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-18 00:34:45.456429 | orchestrator | + sudo systemctl restart manager.service 2025-09-18 00:34:59.084822 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-18 00:34:59.084940 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-18 00:34:59.084957 | orchestrator | + local max_attempts=60 2025-09-18 00:34:59.084969 | orchestrator | + local name=ceph-ansible 2025-09-18 00:34:59.084982 | orchestrator | + local attempt_num=1 2025-09-18 00:34:59.084993 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:34:59.119013 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:34:59.119050 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:34:59.119062 | orchestrator | + sleep 5 2025-09-18 00:35:04.125656 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:04.156733 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:04.156842 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:04.156860 | orchestrator | + sleep 5 2025-09-18 
00:35:09.159677 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:09.187954 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:09.188007 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:09.188021 | orchestrator | + sleep 5 2025-09-18 00:35:14.193703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:14.240478 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:14.240583 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:14.240599 | orchestrator | + sleep 5 2025-09-18 00:35:19.246508 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:19.285705 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:19.285772 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:19.285813 | orchestrator | + sleep 5 2025-09-18 00:35:24.291473 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:24.328623 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:24.328698 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:24.328713 | orchestrator | + sleep 5 2025-09-18 00:35:29.334225 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:29.376559 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:29.376606 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:29.376620 | orchestrator | + sleep 5 2025-09-18 00:35:34.381170 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:34.423001 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:34.423087 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:34.423102 | orchestrator | + sleep 5 2025-09-18 00:35:39.429478 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:39.456404 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:39.456480 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:39.456494 | orchestrator | + sleep 5 2025-09-18 00:35:44.461615 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:44.502173 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:44.502254 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:44.502270 | orchestrator | + sleep 5 2025-09-18 00:35:49.507351 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:49.541763 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:49.541866 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:49.541880 | orchestrator | + sleep 5 2025-09-18 00:35:54.546917 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:54.586747 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:54.586818 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 00:35:54.586832 | orchestrator | + sleep 5 2025-09-18 00:35:59.591505 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:35:59.632554 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-18 00:35:59.632648 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-18 00:35:59.632663 | orchestrator | + sleep 5 2025-09-18 00:36:04.637553 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 00:36:04.678519 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:36:04.678613 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-18 00:36:04.678628 | orchestrator | + local max_attempts=60 2025-09-18 00:36:04.678642 | orchestrator | + local name=kolla-ansible 2025-09-18 00:36:04.678654 | orchestrator | + local attempt_num=1 2025-09-18 00:36:04.679164 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-18 00:36:04.718334 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:36:04.718396 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-18 00:36:04.718411 | orchestrator | + local max_attempts=60 2025-09-18 00:36:04.718424 | orchestrator | + local name=osism-ansible 2025-09-18 00:36:04.718435 | orchestrator | + local attempt_num=1 2025-09-18 00:36:04.718615 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-18 00:36:04.753213 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-18 00:36:04.753320 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-18 00:36:04.753338 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-18 00:36:04.931586 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-18 00:36:05.092812 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-18 00:36:05.248108 | orchestrator | ARA in osism-ansible already disabled. 2025-09-18 00:36:05.380226 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-18 00:36:05.380469 | orchestrator | + osism apply gather-facts 2025-09-18 00:36:17.404073 | orchestrator | 2025-09-18 00:36:17 | INFO  | Task dc946681-2ed9-4ca7-aced-b91b1889ad84 (gather-facts) was prepared for execution. 2025-09-18 00:36:17.404181 | orchestrator | 2025-09-18 00:36:17 | INFO  | It takes a moment until task dc946681-2ed9-4ca7-aced-b91b1889ad84 (gather-facts) has been started and output is visible here. 
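Annotation: the xtrace above polls container health with a shell helper until ceph-ansible, kolla-ansible and osism-ansible all report "healthy". A minimal sketch of such a helper, assuming only the shape implied by the trace (local max_attempts/name/attempt_num, one docker inspect per attempt, sleep 5 between attempts); the real function lives in the testbed configuration scripts and may handle failures differently.

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the health status Docker reports for the container until it is "healthy".
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}
# Usage as seen in the trace: wait_for_container_healthy 60 ceph-ansible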
2025-09-18 00:36:30.706528 | orchestrator | 2025-09-18 00:36:30.706650 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-18 00:36:30.706696 | orchestrator | 2025-09-18 00:36:30.706711 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-18 00:36:30.706723 | orchestrator | Thursday 18 September 2025 00:36:21 +0000 (0:00:00.210) 0:00:00.210 **** 2025-09-18 00:36:30.706735 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:36:30.706748 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:36:30.706759 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:36:30.706771 | orchestrator | ok: [testbed-manager] 2025-09-18 00:36:30.706782 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:36:30.706793 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:36:30.706804 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:36:30.706815 | orchestrator | 2025-09-18 00:36:30.706827 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-18 00:36:30.706838 | orchestrator | 2025-09-18 00:36:30.706850 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-18 00:36:30.706861 | orchestrator | Thursday 18 September 2025 00:36:29 +0000 (0:00:08.471) 0:00:08.681 **** 2025-09-18 00:36:30.706873 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:36:30.706885 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:36:30.706897 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:36:30.706908 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:36:30.706919 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:36:30.706931 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:36:30.706942 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:36:30.706953 | orchestrator | 2025-09-18 00:36:30.706964 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:36:30.706976 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.706989 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.707000 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.707011 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.707023 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.707034 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.707045 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:36:30.707057 | orchestrator | 2025-09-18 00:36:30.707070 | orchestrator | 2025-09-18 00:36:30.707083 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:36:30.707096 | orchestrator | Thursday 18 September 2025 00:36:30 +0000 (0:00:00.518) 0:00:09.200 **** 2025-09-18 00:36:30.707109 | orchestrator | =============================================================================== 2025-09-18 00:36:30.707138 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.47s 2025-09-18 
00:36:30.707152 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-09-18 00:36:31.009752 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-18 00:36:31.023416 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-18 00:36:31.040393 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-18 00:36:31.050972 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-18 00:36:31.061392 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-18 00:36:31.077466 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-18 00:36:31.090241 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-18 00:36:31.103550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-18 00:36:31.115112 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-18 00:36:31.126340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-18 00:36:31.135083 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-18 00:36:31.143785 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-18 00:36:31.152947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-18 00:36:31.161814 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-18 00:36:31.170654 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-18 00:36:31.180396 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-18 00:36:31.189794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-18 00:36:31.198714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-18 00:36:31.209143 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-18 00:36:31.217992 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-18 00:36:31.227433 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-18 00:36:31.343862 | orchestrator | ok: Runtime: 0:23:06.232665 2025-09-18 00:36:31.433493 | 2025-09-18 00:36:31.433628 | TASK [Deploy services] 2025-09-18 00:36:31.964182 | orchestrator | skipping: Conditional result was False 2025-09-18 00:36:31.981932 | 2025-09-18 00:36:31.982112 | TASK [Deploy in a nutshell] 2025-09-18 00:36:32.661353 | orchestrator | + set -e 
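Annotation: the block above publishes the testbed helper scripts as commands on PATH. The pattern, copied from the trace (the comments are assumptions about intent, not taken from the scripts), is a forced symlink per script, so re-running the bootstrap simply refreshes the links:

# -s creates a symbolic link, -f replaces an existing one, so reruns are idempotent.
sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
deploy-openstack   # afterwards the deploy step can be invoked by name from any directory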
2025-09-18 00:36:32.663005 | orchestrator | 2025-09-18 00:36:32.663095 | orchestrator | # PULL IMAGES 2025-09-18 00:36:32.663112 | orchestrator | 2025-09-18 00:36:32.663135 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-18 00:36:32.663156 | orchestrator | ++ export INTERACTIVE=false 2025-09-18 00:36:32.663169 | orchestrator | ++ INTERACTIVE=false 2025-09-18 00:36:32.663211 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-18 00:36:32.663232 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-18 00:36:32.663246 | orchestrator | + source /opt/manager-vars.sh 2025-09-18 00:36:32.663257 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-18 00:36:32.663298 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-18 00:36:32.663310 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-18 00:36:32.663326 | orchestrator | ++ CEPH_VERSION=reef 2025-09-18 00:36:32.663336 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-18 00:36:32.663353 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-18 00:36:32.663363 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-18 00:36:32.663376 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-18 00:36:32.663386 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-18 00:36:32.663397 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-18 00:36:32.663407 | orchestrator | ++ export ARA=false 2025-09-18 00:36:32.663417 | orchestrator | ++ ARA=false 2025-09-18 00:36:32.663427 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-18 00:36:32.663437 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-18 00:36:32.663446 | orchestrator | ++ export TEMPEST=true 2025-09-18 00:36:32.663456 | orchestrator | ++ TEMPEST=true 2025-09-18 00:36:32.663466 | orchestrator | ++ export IS_ZUUL=true 2025-09-18 00:36:32.663475 | orchestrator | ++ IS_ZUUL=true 2025-09-18 00:36:32.663485 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:36:32.663495 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-18 00:36:32.663505 | orchestrator | ++ export EXTERNAL_API=false 2025-09-18 00:36:32.663515 | orchestrator | ++ EXTERNAL_API=false 2025-09-18 00:36:32.663524 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-18 00:36:32.663535 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-18 00:36:32.663544 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-18 00:36:32.663554 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-18 00:36:32.663564 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-18 00:36:32.663574 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-18 00:36:32.663590 | orchestrator | + echo 2025-09-18 00:36:32.663606 | orchestrator | + echo '# PULL IMAGES' 2025-09-18 00:36:32.663623 | orchestrator | + echo 2025-09-18 00:36:32.663674 | orchestrator | ++ semver latest 7.0.0 2025-09-18 00:36:32.726228 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-18 00:36:32.726356 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-18 00:36:32.726372 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-18 00:36:34.585003 | orchestrator | 2025-09-18 00:36:34 | INFO  | Trying to run play pull-images in environment custom 2025-09-18 00:36:44.669383 | orchestrator | 2025-09-18 00:36:44 | INFO  | Task 381e4808-dbfd-4f4c-bcab-55717295456a (pull-images) was prepared for execution. 2025-09-18 00:36:44.669504 | orchestrator | 2025-09-18 00:36:44 | INFO  | Task 381e4808-dbfd-4f4c-bcab-55717295456a is running in background. No more output. Check ARA for logs. 
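Annotation: the xtrace above shows the version guard used before pulling images: semver latest 7.0.0 returns -1, so the numeric branch fails and the literal check for the "latest" tag lets the step run anyway. A minimal sketch of that guard, assuming this control flow (the real include.sh / deploy script may structure it differently):

# MANAGER_VERSION comes from /opt/manager-vars.sh (here: "latest").
if [[ "$(semver "${MANAGER_VERSION}" 7.0.0)" -ge 0 ]] || [[ "${MANAGER_VERSION}" == "latest" ]]; then
    # -e custom selects the "custom" environment; --no-wait backgrounds the task
    # (per the INFO line above, its output is then only visible in ARA).
    osism apply --no-wait -r 2 -e custom pull-images
fi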
2025-09-18 00:36:47.002069 | orchestrator | 2025-09-18 00:36:46 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-18 00:36:57.198679 | orchestrator | 2025-09-18 00:36:57 | INFO  | Task d599e81d-1cad-4b71-85b5-2a188db579d4 (wipe-partitions) was prepared for execution. 2025-09-18 00:36:57.198810 | orchestrator | 2025-09-18 00:36:57 | INFO  | It takes a moment until task d599e81d-1cad-4b71-85b5-2a188db579d4 (wipe-partitions) has been started and output is visible here. 2025-09-18 00:37:10.665618 | orchestrator | 2025-09-18 00:37:10.665729 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-18 00:37:10.665744 | orchestrator | 2025-09-18 00:37:10.665755 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-18 00:37:10.665770 | orchestrator | Thursday 18 September 2025 00:37:02 +0000 (0:00:00.102) 0:00:00.102 **** 2025-09-18 00:37:10.665780 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:37:10.665791 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:37:10.665802 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:37:10.665812 | orchestrator | 2025-09-18 00:37:10.665822 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-18 00:37:10.665855 | orchestrator | Thursday 18 September 2025 00:37:02 +0000 (0:00:00.542) 0:00:00.644 **** 2025-09-18 00:37:10.665866 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:10.665876 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:37:10.665889 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:37:10.665899 | orchestrator | 2025-09-18 00:37:10.665909 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-18 00:37:10.665919 | orchestrator | Thursday 18 September 2025 00:37:03 +0000 (0:00:00.260) 0:00:00.905 **** 2025-09-18 00:37:10.665928 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:37:10.665939 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:37:10.665948 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:10.665958 | orchestrator | 2025-09-18 00:37:10.665968 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-18 00:37:10.665977 | orchestrator | Thursday 18 September 2025 00:37:03 +0000 (0:00:00.670) 0:00:01.576 **** 2025-09-18 00:37:10.665987 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:10.665996 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:37:10.666006 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:37:10.666015 | orchestrator | 2025-09-18 00:37:10.666079 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-18 00:37:10.666089 | orchestrator | Thursday 18 September 2025 00:37:03 +0000 (0:00:00.231) 0:00:01.808 **** 2025-09-18 00:37:10.666098 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-18 00:37:10.666116 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-18 00:37:10.666126 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-18 00:37:10.666168 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-18 00:37:10.666179 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-18 00:37:10.666190 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-18 00:37:10.666202 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-09-18 00:37:10.666213 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-18 00:37:10.666225 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-18 00:37:10.666236 | orchestrator | 2025-09-18 00:37:10.666271 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-18 00:37:10.666282 | orchestrator | Thursday 18 September 2025 00:37:05 +0000 (0:00:01.218) 0:00:03.026 **** 2025-09-18 00:37:10.666292 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-18 00:37:10.666302 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-18 00:37:10.666311 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-18 00:37:10.666321 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-18 00:37:10.666330 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-18 00:37:10.666340 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-18 00:37:10.666349 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-18 00:37:10.666359 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-18 00:37:10.666368 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-18 00:37:10.666378 | orchestrator | 2025-09-18 00:37:10.666388 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-18 00:37:10.666398 | orchestrator | Thursday 18 September 2025 00:37:06 +0000 (0:00:01.476) 0:00:04.503 **** 2025-09-18 00:37:10.666407 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-18 00:37:10.666417 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-18 00:37:10.666427 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-18 00:37:10.666436 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-18 00:37:10.666446 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-18 00:37:10.666455 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-18 00:37:10.666465 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-18 00:37:10.666484 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-18 00:37:10.666500 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-18 00:37:10.666510 | orchestrator | 2025-09-18 00:37:10.666520 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-18 00:37:10.666530 | orchestrator | Thursday 18 September 2025 00:37:09 +0000 (0:00:02.377) 0:00:06.880 **** 2025-09-18 00:37:10.666539 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:37:10.666549 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:37:10.666559 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:37:10.666568 | orchestrator | 2025-09-18 00:37:10.666578 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-18 00:37:10.666587 | orchestrator | Thursday 18 September 2025 00:37:09 +0000 (0:00:00.615) 0:00:07.496 **** 2025-09-18 00:37:10.666597 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:37:10.666606 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:37:10.666616 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:37:10.666625 | orchestrator | 2025-09-18 00:37:10.666635 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:37:10.666647 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:10.666658 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:10.666686 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:10.666696 | orchestrator | 2025-09-18 00:37:10.666705 | orchestrator | 2025-09-18 00:37:10.666715 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:37:10.666725 | orchestrator | Thursday 18 September 2025 00:37:10 +0000 (0:00:00.625) 0:00:08.121 **** 2025-09-18 00:37:10.666734 | orchestrator | =============================================================================== 2025-09-18 00:37:10.666744 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.38s 2025-09-18 00:37:10.666753 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.48s 2025-09-18 00:37:10.666763 | orchestrator | Check device availability ----------------------------------------------- 1.22s 2025-09-18 00:37:10.666773 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.67s 2025-09-18 00:37:10.666782 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-09-18 00:37:10.666792 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-09-18 00:37:10.666801 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s 2025-09-18 00:37:10.666811 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-09-18 00:37:10.666820 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-09-18 00:37:23.008131 | orchestrator | 2025-09-18 00:37:23 | INFO  | Task bdbd990a-97b0-41fd-a320-e5bee089d159 (facts) was prepared for execution. 2025-09-18 00:37:23.008323 | orchestrator | 2025-09-18 00:37:23 | INFO  | It takes a moment until task bdbd990a-97b0-41fd-a320-e5bee089d159 (facts) has been started and output is visible here. 
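Annotation: a shell equivalent of the wipe-partitions play that just finished, assuming straightforward one-to-one commands for the wipefs, zeroing and udev tasks. The device list is copied from the play output; flags such as oflag=direct are illustrative, not taken from the playbook.

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipefs --all "${dev}"                                    # "Wipe partitions with wipefs"
    dd if=/dev/zero of="${dev}" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros"
done
udevadm control --reload-rules                               # "Reload udev rules"
udevadm trigger                                              # "Request device events from the kernel"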
2025-09-18 00:37:36.136810 | orchestrator | 2025-09-18 00:37:36.136933 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-18 00:37:36.136952 | orchestrator | 2025-09-18 00:37:36.136965 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-18 00:37:36.136977 | orchestrator | Thursday 18 September 2025 00:37:26 +0000 (0:00:00.267) 0:00:00.267 **** 2025-09-18 00:37:36.136989 | orchestrator | ok: [testbed-manager] 2025-09-18 00:37:36.137000 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:37:36.137011 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:37:36.137045 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:37:36.137056 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:36.137067 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:37:36.137077 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:37:36.137088 | orchestrator | 2025-09-18 00:37:36.137099 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-18 00:37:36.137110 | orchestrator | Thursday 18 September 2025 00:37:28 +0000 (0:00:01.082) 0:00:01.350 **** 2025-09-18 00:37:36.137120 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:37:36.137132 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:37:36.137142 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:37:36.137153 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:37:36.137163 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:36.137174 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:37:36.137184 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:37:36.137195 | orchestrator | 2025-09-18 00:37:36.137206 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-18 00:37:36.137219 | orchestrator | 2025-09-18 00:37:36.137311 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-18 00:37:36.137326 | orchestrator | Thursday 18 September 2025 00:37:29 +0000 (0:00:01.265) 0:00:02.615 **** 2025-09-18 00:37:36.137337 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:37:36.137348 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:37:36.137360 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:37:36.137373 | orchestrator | ok: [testbed-manager] 2025-09-18 00:37:36.137386 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:37:36.137398 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:36.137410 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:37:36.137423 | orchestrator | 2025-09-18 00:37:36.137435 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-18 00:37:36.137447 | orchestrator | 2025-09-18 00:37:36.137460 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-18 00:37:36.137472 | orchestrator | Thursday 18 September 2025 00:37:34 +0000 (0:00:05.669) 0:00:08.284 **** 2025-09-18 00:37:36.137484 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:37:36.137496 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:37:36.137508 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:37:36.137520 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:37:36.137532 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:36.137544 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:37:36.137556 | orchestrator | skipping: 
[testbed-node-5] 2025-09-18 00:37:36.137568 | orchestrator | 2025-09-18 00:37:36.137580 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:37:36.137593 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137607 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137619 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137631 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137643 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137656 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137668 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:37:36.137681 | orchestrator | 2025-09-18 00:37:36.137702 | orchestrator | 2025-09-18 00:37:36.137713 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:37:36.137724 | orchestrator | Thursday 18 September 2025 00:37:35 +0000 (0:00:00.800) 0:00:09.085 **** 2025-09-18 00:37:36.137735 | orchestrator | =============================================================================== 2025-09-18 00:37:36.137746 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s 2025-09-18 00:37:36.137756 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-09-18 00:37:36.137767 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2025-09-18 00:37:36.137778 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.80s 2025-09-18 00:37:38.542298 | orchestrator | 2025-09-18 00:37:38 | INFO  | Task 6673b4bb-542b-433c-8ed9-acd60e2d4ee2 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-18 00:37:38.542400 | orchestrator | 2025-09-18 00:37:38 | INFO  | It takes a moment until task 6673b4bb-542b-433c-8ed9-acd60e2d4ee2 (ceph-configure-lvm-volumes) has been started and output is visible here. 
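Annotation: every step of this deploy follows the same CLI pattern, visible in the INFO lines: osism apply prepares a task (the UUID is printed), then either streams the play output into this console or, with --no-wait, detaches and leaves the output to ARA. Illustrative invocations, copied or paraphrased from the trace; the ceph-configure-lvm-volumes call is an assumption about how the deploy script triggers the play shown next, since only its prepared task appears in the log.

osism apply gather-facts                           # blocks until the play finishes, output streamed above
osism apply --no-wait -r 2 -e custom pull-images   # returns immediately; "Check ARA for logs"
osism apply facts                                  # refreshes custom facts before the Ceph configuration
osism apply ceph-configure-lvm-volumes             # assumed trigger for the "Ceph configure LVM" play below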
2025-09-18 00:37:50.473994 | orchestrator | 2025-09-18 00:37:50.474160 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-18 00:37:50.474180 | orchestrator | 2025-09-18 00:37:50.474193 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 00:37:50.474205 | orchestrator | Thursday 18 September 2025 00:37:42 +0000 (0:00:00.325) 0:00:00.325 **** 2025-09-18 00:37:50.474216 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-18 00:37:50.474228 | orchestrator | 2025-09-18 00:37:50.474291 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 00:37:50.474305 | orchestrator | Thursday 18 September 2025 00:37:43 +0000 (0:00:00.258) 0:00:00.583 **** 2025-09-18 00:37:50.474316 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:50.474328 | orchestrator | 2025-09-18 00:37:50.474339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474350 | orchestrator | Thursday 18 September 2025 00:37:43 +0000 (0:00:00.223) 0:00:00.807 **** 2025-09-18 00:37:50.474361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-18 00:37:50.474373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-18 00:37:50.474384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-18 00:37:50.474406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-18 00:37:50.474418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-18 00:37:50.474429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-18 00:37:50.474439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-18 00:37:50.474450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-18 00:37:50.474461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-18 00:37:50.474472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-18 00:37:50.474483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-18 00:37:50.474493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-18 00:37:50.474504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-18 00:37:50.474517 | orchestrator | 2025-09-18 00:37:50.474529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474541 | orchestrator | Thursday 18 September 2025 00:37:43 +0000 (0:00:00.394) 0:00:01.201 **** 2025-09-18 00:37:50.474553 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474584 | orchestrator | 2025-09-18 00:37:50.474598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474611 | orchestrator | Thursday 18 September 2025 00:37:44 +0000 (0:00:00.533) 0:00:01.735 **** 2025-09-18 00:37:50.474623 | orchestrator | skipping: [testbed-node-3] 2025-09-18 
00:37:50.474636 | orchestrator | 2025-09-18 00:37:50.474648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474661 | orchestrator | Thursday 18 September 2025 00:37:44 +0000 (0:00:00.205) 0:00:01.940 **** 2025-09-18 00:37:50.474673 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474685 | orchestrator | 2025-09-18 00:37:50.474698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474710 | orchestrator | Thursday 18 September 2025 00:37:44 +0000 (0:00:00.194) 0:00:02.135 **** 2025-09-18 00:37:50.474722 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474739 | orchestrator | 2025-09-18 00:37:50.474752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474765 | orchestrator | Thursday 18 September 2025 00:37:44 +0000 (0:00:00.187) 0:00:02.322 **** 2025-09-18 00:37:50.474777 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474790 | orchestrator | 2025-09-18 00:37:50.474803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474816 | orchestrator | Thursday 18 September 2025 00:37:44 +0000 (0:00:00.195) 0:00:02.517 **** 2025-09-18 00:37:50.474828 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474840 | orchestrator | 2025-09-18 00:37:50.474853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474865 | orchestrator | Thursday 18 September 2025 00:37:45 +0000 (0:00:00.181) 0:00:02.699 **** 2025-09-18 00:37:50.474876 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474887 | orchestrator | 2025-09-18 00:37:50.474897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474908 | orchestrator | Thursday 18 September 2025 00:37:45 +0000 (0:00:00.202) 0:00:02.902 **** 2025-09-18 00:37:50.474919 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.474930 | orchestrator | 2025-09-18 00:37:50.474940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.474951 | orchestrator | Thursday 18 September 2025 00:37:45 +0000 (0:00:00.210) 0:00:03.112 **** 2025-09-18 00:37:50.474962 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f) 2025-09-18 00:37:50.474974 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f) 2025-09-18 00:37:50.474984 | orchestrator | 2025-09-18 00:37:50.474995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.475006 | orchestrator | Thursday 18 September 2025 00:37:45 +0000 (0:00:00.410) 0:00:03.523 **** 2025-09-18 00:37:50.475034 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca) 2025-09-18 00:37:50.475046 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca) 2025-09-18 00:37:50.475057 | orchestrator | 2025-09-18 00:37:50.475067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.475078 | orchestrator | Thursday 18 September 2025 00:37:46 +0000 (0:00:00.432) 0:00:03.955 **** 2025-09-18 
00:37:50.475095 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738) 2025-09-18 00:37:50.475106 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738) 2025-09-18 00:37:50.475117 | orchestrator | 2025-09-18 00:37:50.475127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.475138 | orchestrator | Thursday 18 September 2025 00:37:47 +0000 (0:00:00.652) 0:00:04.608 **** 2025-09-18 00:37:50.475148 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97) 2025-09-18 00:37:50.475168 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97) 2025-09-18 00:37:50.475178 | orchestrator | 2025-09-18 00:37:50.475189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:50.475199 | orchestrator | Thursday 18 September 2025 00:37:47 +0000 (0:00:00.691) 0:00:05.299 **** 2025-09-18 00:37:50.475210 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 00:37:50.475221 | orchestrator | 2025-09-18 00:37:50.475232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475264 | orchestrator | Thursday 18 September 2025 00:37:48 +0000 (0:00:00.724) 0:00:06.024 **** 2025-09-18 00:37:50.475275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-18 00:37:50.475286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-18 00:37:50.475297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-18 00:37:50.475308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-18 00:37:50.475318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-18 00:37:50.475329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-18 00:37:50.475339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-18 00:37:50.475350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-18 00:37:50.475360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-18 00:37:50.475371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-18 00:37:50.475382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-18 00:37:50.475393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-18 00:37:50.475403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-18 00:37:50.475414 | orchestrator | 2025-09-18 00:37:50.475425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475436 | orchestrator | Thursday 18 September 2025 00:37:48 +0000 (0:00:00.423) 0:00:06.447 **** 2025-09-18 00:37:50.475446 | orchestrator | skipping: [testbed-node-3] 
2025-09-18 00:37:50.475457 | orchestrator | 2025-09-18 00:37:50.475468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475478 | orchestrator | Thursday 18 September 2025 00:37:49 +0000 (0:00:00.205) 0:00:06.653 **** 2025-09-18 00:37:50.475489 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.475500 | orchestrator | 2025-09-18 00:37:50.475511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475521 | orchestrator | Thursday 18 September 2025 00:37:49 +0000 (0:00:00.196) 0:00:06.849 **** 2025-09-18 00:37:50.475532 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.475543 | orchestrator | 2025-09-18 00:37:50.475553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475564 | orchestrator | Thursday 18 September 2025 00:37:49 +0000 (0:00:00.202) 0:00:07.051 **** 2025-09-18 00:37:50.475575 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.475586 | orchestrator | 2025-09-18 00:37:50.475596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475607 | orchestrator | Thursday 18 September 2025 00:37:49 +0000 (0:00:00.209) 0:00:07.261 **** 2025-09-18 00:37:50.475618 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.475628 | orchestrator | 2025-09-18 00:37:50.475639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475657 | orchestrator | Thursday 18 September 2025 00:37:49 +0000 (0:00:00.194) 0:00:07.455 **** 2025-09-18 00:37:50.475668 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.475679 | orchestrator | 2025-09-18 00:37:50.475689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475700 | orchestrator | Thursday 18 September 2025 00:37:50 +0000 (0:00:00.197) 0:00:07.653 **** 2025-09-18 00:37:50.475710 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:50.475721 | orchestrator | 2025-09-18 00:37:50.475732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:50.475742 | orchestrator | Thursday 18 September 2025 00:37:50 +0000 (0:00:00.196) 0:00:07.849 **** 2025-09-18 00:37:50.475760 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160161 | orchestrator | 2025-09-18 00:37:58.160294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:58.160307 | orchestrator | Thursday 18 September 2025 00:37:50 +0000 (0:00:00.195) 0:00:08.045 **** 2025-09-18 00:37:58.160315 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-18 00:37:58.160323 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-18 00:37:58.160330 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-18 00:37:58.160336 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-18 00:37:58.160343 | orchestrator | 2025-09-18 00:37:58.160349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:58.160356 | orchestrator | Thursday 18 September 2025 00:37:51 +0000 (0:00:01.049) 0:00:09.095 **** 2025-09-18 00:37:58.160378 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160384 | orchestrator | 2025-09-18 00:37:58.160391 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:58.160397 | orchestrator | Thursday 18 September 2025 00:37:51 +0000 (0:00:00.207) 0:00:09.302 **** 2025-09-18 00:37:58.160403 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160409 | orchestrator | 2025-09-18 00:37:58.160416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:58.160422 | orchestrator | Thursday 18 September 2025 00:37:51 +0000 (0:00:00.193) 0:00:09.496 **** 2025-09-18 00:37:58.160429 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160435 | orchestrator | 2025-09-18 00:37:58.160441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:37:58.160448 | orchestrator | Thursday 18 September 2025 00:37:52 +0000 (0:00:00.212) 0:00:09.709 **** 2025-09-18 00:37:58.160454 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160460 | orchestrator | 2025-09-18 00:37:58.160466 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-18 00:37:58.160473 | orchestrator | Thursday 18 September 2025 00:37:52 +0000 (0:00:00.210) 0:00:09.919 **** 2025-09-18 00:37:58.160479 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-18 00:37:58.160485 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-18 00:37:58.160492 | orchestrator | 2025-09-18 00:37:58.160498 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-18 00:37:58.160504 | orchestrator | Thursday 18 September 2025 00:37:52 +0000 (0:00:00.180) 0:00:10.100 **** 2025-09-18 00:37:58.160510 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160517 | orchestrator | 2025-09-18 00:37:58.160523 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-18 00:37:58.160529 | orchestrator | Thursday 18 September 2025 00:37:52 +0000 (0:00:00.135) 0:00:10.236 **** 2025-09-18 00:37:58.160535 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160542 | orchestrator | 2025-09-18 00:37:58.160548 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-18 00:37:58.160554 | orchestrator | Thursday 18 September 2025 00:37:52 +0000 (0:00:00.141) 0:00:10.377 **** 2025-09-18 00:37:58.160561 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160583 | orchestrator | 2025-09-18 00:37:58.160589 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-18 00:37:58.160595 | orchestrator | Thursday 18 September 2025 00:37:52 +0000 (0:00:00.134) 0:00:10.511 **** 2025-09-18 00:37:58.160602 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:58.160608 | orchestrator | 2025-09-18 00:37:58.160614 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-18 00:37:58.160620 | orchestrator | Thursday 18 September 2025 00:37:53 +0000 (0:00:00.145) 0:00:10.657 **** 2025-09-18 00:37:58.160627 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cde6920-619d-54be-8750-7c50463ca655'}}) 2025-09-18 00:37:58.160634 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ac78a0a-4049-5f74-bf32-d6052d628b7d'}}) 2025-09-18 00:37:58.160640 | orchestrator | 
2025-09-18 00:37:58.160646 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-18 00:37:58.160652 | orchestrator | Thursday 18 September 2025 00:37:53 +0000 (0:00:00.194) 0:00:10.851 **** 2025-09-18 00:37:58.160659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cde6920-619d-54be-8750-7c50463ca655'}})  2025-09-18 00:37:58.160672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ac78a0a-4049-5f74-bf32-d6052d628b7d'}})  2025-09-18 00:37:58.160678 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160685 | orchestrator | 2025-09-18 00:37:58.160691 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-18 00:37:58.160697 | orchestrator | Thursday 18 September 2025 00:37:53 +0000 (0:00:00.157) 0:00:11.009 **** 2025-09-18 00:37:58.160703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cde6920-619d-54be-8750-7c50463ca655'}})  2025-09-18 00:37:58.160709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ac78a0a-4049-5f74-bf32-d6052d628b7d'}})  2025-09-18 00:37:58.160715 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160722 | orchestrator | 2025-09-18 00:37:58.160729 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-18 00:37:58.160736 | orchestrator | Thursday 18 September 2025 00:37:53 +0000 (0:00:00.354) 0:00:11.363 **** 2025-09-18 00:37:58.160743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cde6920-619d-54be-8750-7c50463ca655'}})  2025-09-18 00:37:58.160751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ac78a0a-4049-5f74-bf32-d6052d628b7d'}})  2025-09-18 00:37:58.160758 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160764 | orchestrator | 2025-09-18 00:37:58.160783 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-18 00:37:58.160791 | orchestrator | Thursday 18 September 2025 00:37:53 +0000 (0:00:00.155) 0:00:11.518 **** 2025-09-18 00:37:58.160798 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:58.160805 | orchestrator | 2025-09-18 00:37:58.160812 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-18 00:37:58.160820 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.154) 0:00:11.673 **** 2025-09-18 00:37:58.160827 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:37:58.160834 | orchestrator | 2025-09-18 00:37:58.160841 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-18 00:37:58.160848 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.148) 0:00:11.821 **** 2025-09-18 00:37:58.160856 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160863 | orchestrator | 2025-09-18 00:37:58.160870 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-18 00:37:58.160877 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.141) 0:00:11.962 **** 2025-09-18 00:37:58.160884 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160891 | orchestrator | 2025-09-18 00:37:58.160904 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-18 00:37:58.160911 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.134) 0:00:12.097 **** 2025-09-18 00:37:58.160918 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.160925 | orchestrator | 2025-09-18 00:37:58.160932 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-18 00:37:58.160939 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.143) 0:00:12.241 **** 2025-09-18 00:37:58.160947 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 00:37:58.160954 | orchestrator |  "ceph_osd_devices": { 2025-09-18 00:37:58.160961 | orchestrator |  "sdb": { 2025-09-18 00:37:58.160969 | orchestrator |  "osd_lvm_uuid": "0cde6920-619d-54be-8750-7c50463ca655" 2025-09-18 00:37:58.160976 | orchestrator |  }, 2025-09-18 00:37:58.160984 | orchestrator |  "sdc": { 2025-09-18 00:37:58.160992 | orchestrator |  "osd_lvm_uuid": "3ac78a0a-4049-5f74-bf32-d6052d628b7d" 2025-09-18 00:37:58.160999 | orchestrator |  } 2025-09-18 00:37:58.161006 | orchestrator |  } 2025-09-18 00:37:58.161013 | orchestrator | } 2025-09-18 00:37:58.161020 | orchestrator | 2025-09-18 00:37:58.161027 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-18 00:37:58.161034 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.152) 0:00:12.394 **** 2025-09-18 00:37:58.161041 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.161048 | orchestrator | 2025-09-18 00:37:58.161055 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-18 00:37:58.161063 | orchestrator | Thursday 18 September 2025 00:37:54 +0000 (0:00:00.158) 0:00:12.553 **** 2025-09-18 00:37:58.161074 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.161081 | orchestrator | 2025-09-18 00:37:58.161087 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-18 00:37:58.161093 | orchestrator | Thursday 18 September 2025 00:37:55 +0000 (0:00:00.142) 0:00:12.695 **** 2025-09-18 00:37:58.161099 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:37:58.161106 | orchestrator | 2025-09-18 00:37:58.161112 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-18 00:37:58.161118 | orchestrator | Thursday 18 September 2025 00:37:55 +0000 (0:00:00.136) 0:00:12.831 **** 2025-09-18 00:37:58.161124 | orchestrator | changed: [testbed-node-3] => { 2025-09-18 00:37:58.161130 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-18 00:37:58.161136 | orchestrator |  "ceph_osd_devices": { 2025-09-18 00:37:58.161142 | orchestrator |  "sdb": { 2025-09-18 00:37:58.161148 | orchestrator |  "osd_lvm_uuid": "0cde6920-619d-54be-8750-7c50463ca655" 2025-09-18 00:37:58.161155 | orchestrator |  }, 2025-09-18 00:37:58.161161 | orchestrator |  "sdc": { 2025-09-18 00:37:58.161167 | orchestrator |  "osd_lvm_uuid": "3ac78a0a-4049-5f74-bf32-d6052d628b7d" 2025-09-18 00:37:58.161173 | orchestrator |  } 2025-09-18 00:37:58.161179 | orchestrator |  }, 2025-09-18 00:37:58.161185 | orchestrator |  "lvm_volumes": [ 2025-09-18 00:37:58.161192 | orchestrator |  { 2025-09-18 00:37:58.161198 | orchestrator |  "data": "osd-block-0cde6920-619d-54be-8750-7c50463ca655", 2025-09-18 00:37:58.161204 | orchestrator |  "data_vg": "ceph-0cde6920-619d-54be-8750-7c50463ca655" 2025-09-18 00:37:58.161210 | orchestrator |  }, 2025-09-18 
00:37:58.161217 | orchestrator |  { 2025-09-18 00:37:58.161223 | orchestrator |  "data": "osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d", 2025-09-18 00:37:58.161229 | orchestrator |  "data_vg": "ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d" 2025-09-18 00:37:58.161235 | orchestrator |  } 2025-09-18 00:37:58.161255 | orchestrator |  ] 2025-09-18 00:37:58.161262 | orchestrator |  } 2025-09-18 00:37:58.161268 | orchestrator | } 2025-09-18 00:37:58.161274 | orchestrator | 2025-09-18 00:37:58.161281 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-18 00:37:58.161294 | orchestrator | Thursday 18 September 2025 00:37:55 +0000 (0:00:00.220) 0:00:13.052 **** 2025-09-18 00:37:58.161300 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-18 00:37:58.161307 | orchestrator | 2025-09-18 00:37:58.161313 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-18 00:37:58.161319 | orchestrator | 2025-09-18 00:37:58.161325 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 00:37:58.161331 | orchestrator | Thursday 18 September 2025 00:37:57 +0000 (0:00:02.193) 0:00:15.246 **** 2025-09-18 00:37:58.161338 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-18 00:37:58.161344 | orchestrator | 2025-09-18 00:37:58.161350 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 00:37:58.161356 | orchestrator | Thursday 18 September 2025 00:37:57 +0000 (0:00:00.247) 0:00:15.493 **** 2025-09-18 00:37:58.161362 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:37:58.161369 | orchestrator | 2025-09-18 00:37:58.161375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:37:58.161385 | orchestrator | Thursday 18 September 2025 00:37:58 +0000 (0:00:00.242) 0:00:15.735 **** 2025-09-18 00:38:05.701217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-18 00:38:05.701351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-18 00:38:05.701367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-18 00:38:05.701379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-18 00:38:05.701390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-18 00:38:05.701401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-18 00:38:05.701412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-18 00:38:05.701423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-18 00:38:05.701434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-18 00:38:05.701445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-18 00:38:05.701472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-18 00:38:05.701484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-18 00:38:05.701495 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-18 00:38:05.701509 | orchestrator | 2025-09-18 00:38:05.701521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701533 | orchestrator | Thursday 18 September 2025 00:37:58 +0000 (0:00:00.376) 0:00:16.112 **** 2025-09-18 00:38:05.701545 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701556 | orchestrator | 2025-09-18 00:38:05.701567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701578 | orchestrator | Thursday 18 September 2025 00:37:58 +0000 (0:00:00.204) 0:00:16.316 **** 2025-09-18 00:38:05.701589 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701600 | orchestrator | 2025-09-18 00:38:05.701611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701622 | orchestrator | Thursday 18 September 2025 00:37:58 +0000 (0:00:00.184) 0:00:16.501 **** 2025-09-18 00:38:05.701633 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701644 | orchestrator | 2025-09-18 00:38:05.701668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701689 | orchestrator | Thursday 18 September 2025 00:37:59 +0000 (0:00:00.222) 0:00:16.723 **** 2025-09-18 00:38:05.701700 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701732 | orchestrator | 2025-09-18 00:38:05.701744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701755 | orchestrator | Thursday 18 September 2025 00:37:59 +0000 (0:00:00.189) 0:00:16.913 **** 2025-09-18 00:38:05.701766 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701779 | orchestrator | 2025-09-18 00:38:05.701792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701804 | orchestrator | Thursday 18 September 2025 00:37:59 +0000 (0:00:00.656) 0:00:17.569 **** 2025-09-18 00:38:05.701817 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701829 | orchestrator | 2025-09-18 00:38:05.701843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701855 | orchestrator | Thursday 18 September 2025 00:38:00 +0000 (0:00:00.199) 0:00:17.769 **** 2025-09-18 00:38:05.701868 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701880 | orchestrator | 2025-09-18 00:38:05.701893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701905 | orchestrator | Thursday 18 September 2025 00:38:00 +0000 (0:00:00.222) 0:00:17.991 **** 2025-09-18 00:38:05.701918 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.701931 | orchestrator | 2025-09-18 00:38:05.701944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.701957 | orchestrator | Thursday 18 September 2025 00:38:00 +0000 (0:00:00.228) 0:00:18.219 **** 2025-09-18 00:38:05.701970 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e) 2025-09-18 00:38:05.701984 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e) 2025-09-18 00:38:05.701998 | orchestrator | 2025-09-18 
00:38:05.702011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.702071 | orchestrator | Thursday 18 September 2025 00:38:01 +0000 (0:00:00.435) 0:00:18.655 **** 2025-09-18 00:38:05.702084 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8) 2025-09-18 00:38:05.702096 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8) 2025-09-18 00:38:05.702109 | orchestrator | 2025-09-18 00:38:05.702122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.702135 | orchestrator | Thursday 18 September 2025 00:38:01 +0000 (0:00:00.497) 0:00:19.152 **** 2025-09-18 00:38:05.702145 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31) 2025-09-18 00:38:05.702156 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31) 2025-09-18 00:38:05.702167 | orchestrator | 2025-09-18 00:38:05.702177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.702188 | orchestrator | Thursday 18 September 2025 00:38:02 +0000 (0:00:00.442) 0:00:19.595 **** 2025-09-18 00:38:05.702215 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d) 2025-09-18 00:38:05.702226 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d) 2025-09-18 00:38:05.702237 | orchestrator | 2025-09-18 00:38:05.702265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:05.702277 | orchestrator | Thursday 18 September 2025 00:38:02 +0000 (0:00:00.433) 0:00:20.029 **** 2025-09-18 00:38:05.702288 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 00:38:05.702298 | orchestrator | 2025-09-18 00:38:05.702309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702326 | orchestrator | Thursday 18 September 2025 00:38:02 +0000 (0:00:00.346) 0:00:20.375 **** 2025-09-18 00:38:05.702337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-18 00:38:05.702357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-18 00:38:05.702368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-18 00:38:05.702379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-18 00:38:05.702390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-18 00:38:05.702400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-18 00:38:05.702411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-18 00:38:05.702422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-18 00:38:05.702432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-18 00:38:05.702443 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-18 00:38:05.702453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-18 00:38:05.702464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-18 00:38:05.702474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-18 00:38:05.702485 | orchestrator | 2025-09-18 00:38:05.702496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702506 | orchestrator | Thursday 18 September 2025 00:38:03 +0000 (0:00:00.375) 0:00:20.751 **** 2025-09-18 00:38:05.702517 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702528 | orchestrator | 2025-09-18 00:38:05.702539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702550 | orchestrator | Thursday 18 September 2025 00:38:03 +0000 (0:00:00.188) 0:00:20.939 **** 2025-09-18 00:38:05.702560 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702571 | orchestrator | 2025-09-18 00:38:05.702582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702593 | orchestrator | Thursday 18 September 2025 00:38:04 +0000 (0:00:00.674) 0:00:21.614 **** 2025-09-18 00:38:05.702604 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702614 | orchestrator | 2025-09-18 00:38:05.702625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702636 | orchestrator | Thursday 18 September 2025 00:38:04 +0000 (0:00:00.210) 0:00:21.824 **** 2025-09-18 00:38:05.702647 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702657 | orchestrator | 2025-09-18 00:38:05.702668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702679 | orchestrator | Thursday 18 September 2025 00:38:04 +0000 (0:00:00.188) 0:00:22.013 **** 2025-09-18 00:38:05.702690 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702700 | orchestrator | 2025-09-18 00:38:05.702711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702722 | orchestrator | Thursday 18 September 2025 00:38:04 +0000 (0:00:00.142) 0:00:22.156 **** 2025-09-18 00:38:05.702732 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702743 | orchestrator | 2025-09-18 00:38:05.702754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702765 | orchestrator | Thursday 18 September 2025 00:38:04 +0000 (0:00:00.182) 0:00:22.339 **** 2025-09-18 00:38:05.702775 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702786 | orchestrator | 2025-09-18 00:38:05.702797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702808 | orchestrator | Thursday 18 September 2025 00:38:04 +0000 (0:00:00.146) 0:00:22.485 **** 2025-09-18 00:38:05.702818 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702829 | orchestrator | 2025-09-18 00:38:05.702840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702856 | orchestrator | Thursday 18 September 
2025 00:38:05 +0000 (0:00:00.138) 0:00:22.624 **** 2025-09-18 00:38:05.702867 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-18 00:38:05.702879 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-18 00:38:05.702889 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-18 00:38:05.702900 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-18 00:38:05.702911 | orchestrator | 2025-09-18 00:38:05.702921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:05.702932 | orchestrator | Thursday 18 September 2025 00:38:05 +0000 (0:00:00.482) 0:00:23.106 **** 2025-09-18 00:38:05.702943 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:05.702954 | orchestrator | 2025-09-18 00:38:05.702971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:10.971301 | orchestrator | Thursday 18 September 2025 00:38:05 +0000 (0:00:00.173) 0:00:23.280 **** 2025-09-18 00:38:10.971390 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971406 | orchestrator | 2025-09-18 00:38:10.971419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:10.971430 | orchestrator | Thursday 18 September 2025 00:38:05 +0000 (0:00:00.163) 0:00:23.443 **** 2025-09-18 00:38:10.971441 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971452 | orchestrator | 2025-09-18 00:38:10.971464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:10.971475 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.184) 0:00:23.627 **** 2025-09-18 00:38:10.971485 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971496 | orchestrator | 2025-09-18 00:38:10.971524 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-18 00:38:10.971536 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.171) 0:00:23.799 **** 2025-09-18 00:38:10.971546 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-18 00:38:10.971557 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-18 00:38:10.971568 | orchestrator | 2025-09-18 00:38:10.971579 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-18 00:38:10.971589 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.288) 0:00:24.087 **** 2025-09-18 00:38:10.971600 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971611 | orchestrator | 2025-09-18 00:38:10.971622 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-18 00:38:10.971633 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.112) 0:00:24.200 **** 2025-09-18 00:38:10.971644 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971655 | orchestrator | 2025-09-18 00:38:10.971666 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-18 00:38:10.971677 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.121) 0:00:24.321 **** 2025-09-18 00:38:10.971687 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971698 | orchestrator | 2025-09-18 00:38:10.971709 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-18 
00:38:10.971720 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.122) 0:00:24.444 **** 2025-09-18 00:38:10.971731 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:38:10.971742 | orchestrator | 2025-09-18 00:38:10.971753 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-18 00:38:10.971764 | orchestrator | Thursday 18 September 2025 00:38:06 +0000 (0:00:00.136) 0:00:24.580 **** 2025-09-18 00:38:10.971775 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b959ef4-2353-55d9-9e37-ea43ed82416b'}}) 2025-09-18 00:38:10.971786 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '652709a4-002d-5e7f-9b0a-9f9e264992f4'}}) 2025-09-18 00:38:10.971797 | orchestrator | 2025-09-18 00:38:10.971808 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-18 00:38:10.971837 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.133) 0:00:24.714 **** 2025-09-18 00:38:10.971851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b959ef4-2353-55d9-9e37-ea43ed82416b'}})  2025-09-18 00:38:10.971864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '652709a4-002d-5e7f-9b0a-9f9e264992f4'}})  2025-09-18 00:38:10.971877 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971889 | orchestrator | 2025-09-18 00:38:10.971901 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-18 00:38:10.971915 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.127) 0:00:24.841 **** 2025-09-18 00:38:10.971927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b959ef4-2353-55d9-9e37-ea43ed82416b'}})  2025-09-18 00:38:10.971940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '652709a4-002d-5e7f-9b0a-9f9e264992f4'}})  2025-09-18 00:38:10.971953 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.971964 | orchestrator | 2025-09-18 00:38:10.971976 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-18 00:38:10.971989 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.138) 0:00:24.980 **** 2025-09-18 00:38:10.972002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b959ef4-2353-55d9-9e37-ea43ed82416b'}})  2025-09-18 00:38:10.972014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '652709a4-002d-5e7f-9b0a-9f9e264992f4'}})  2025-09-18 00:38:10.972027 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972038 | orchestrator | 2025-09-18 00:38:10.972050 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-18 00:38:10.972063 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.136) 0:00:25.116 **** 2025-09-18 00:38:10.972075 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:38:10.972087 | orchestrator | 2025-09-18 00:38:10.972099 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-18 00:38:10.972111 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.119) 0:00:25.235 **** 2025-09-18 00:38:10.972123 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:38:10.972136 
| orchestrator | 2025-09-18 00:38:10.972147 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-18 00:38:10.972160 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.127) 0:00:25.362 **** 2025-09-18 00:38:10.972172 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972183 | orchestrator | 2025-09-18 00:38:10.972208 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-18 00:38:10.972219 | orchestrator | Thursday 18 September 2025 00:38:07 +0000 (0:00:00.115) 0:00:25.477 **** 2025-09-18 00:38:10.972230 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972241 | orchestrator | 2025-09-18 00:38:10.972274 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-18 00:38:10.972285 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.276) 0:00:25.754 **** 2025-09-18 00:38:10.972296 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972307 | orchestrator | 2025-09-18 00:38:10.972317 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-18 00:38:10.972328 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.116) 0:00:25.870 **** 2025-09-18 00:38:10.972339 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 00:38:10.972350 | orchestrator |  "ceph_osd_devices": { 2025-09-18 00:38:10.972361 | orchestrator |  "sdb": { 2025-09-18 00:38:10.972373 | orchestrator |  "osd_lvm_uuid": "7b959ef4-2353-55d9-9e37-ea43ed82416b" 2025-09-18 00:38:10.972384 | orchestrator |  }, 2025-09-18 00:38:10.972395 | orchestrator |  "sdc": { 2025-09-18 00:38:10.972414 | orchestrator |  "osd_lvm_uuid": "652709a4-002d-5e7f-9b0a-9f9e264992f4" 2025-09-18 00:38:10.972425 | orchestrator |  } 2025-09-18 00:38:10.972436 | orchestrator |  } 2025-09-18 00:38:10.972447 | orchestrator | } 2025-09-18 00:38:10.972458 | orchestrator | 2025-09-18 00:38:10.972469 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-18 00:38:10.972479 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.106) 0:00:25.977 **** 2025-09-18 00:38:10.972490 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972501 | orchestrator | 2025-09-18 00:38:10.972517 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-18 00:38:10.972528 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.114) 0:00:26.092 **** 2025-09-18 00:38:10.972539 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972549 | orchestrator | 2025-09-18 00:38:10.972560 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-18 00:38:10.972571 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.094) 0:00:26.186 **** 2025-09-18 00:38:10.972582 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:38:10.972593 | orchestrator | 2025-09-18 00:38:10.972603 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-18 00:38:10.972614 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.101) 0:00:26.288 **** 2025-09-18 00:38:10.972625 | orchestrator | changed: [testbed-node-4] => { 2025-09-18 00:38:10.972636 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-18 00:38:10.972646 | orchestrator |  "ceph_osd_devices": { 2025-09-18 
00:38:10.972657 | orchestrator |  "sdb": { 2025-09-18 00:38:10.972668 | orchestrator |  "osd_lvm_uuid": "7b959ef4-2353-55d9-9e37-ea43ed82416b" 2025-09-18 00:38:10.972683 | orchestrator |  }, 2025-09-18 00:38:10.972694 | orchestrator |  "sdc": { 2025-09-18 00:38:10.972705 | orchestrator |  "osd_lvm_uuid": "652709a4-002d-5e7f-9b0a-9f9e264992f4" 2025-09-18 00:38:10.972716 | orchestrator |  } 2025-09-18 00:38:10.972726 | orchestrator |  }, 2025-09-18 00:38:10.972737 | orchestrator |  "lvm_volumes": [ 2025-09-18 00:38:10.972748 | orchestrator |  { 2025-09-18 00:38:10.972759 | orchestrator |  "data": "osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b", 2025-09-18 00:38:10.972770 | orchestrator |  "data_vg": "ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b" 2025-09-18 00:38:10.972781 | orchestrator |  }, 2025-09-18 00:38:10.972792 | orchestrator |  { 2025-09-18 00:38:10.972803 | orchestrator |  "data": "osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4", 2025-09-18 00:38:10.972813 | orchestrator |  "data_vg": "ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4" 2025-09-18 00:38:10.972824 | orchestrator |  } 2025-09-18 00:38:10.972835 | orchestrator |  ] 2025-09-18 00:38:10.972845 | orchestrator |  } 2025-09-18 00:38:10.972856 | orchestrator | } 2025-09-18 00:38:10.972867 | orchestrator | 2025-09-18 00:38:10.972878 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-18 00:38:10.972889 | orchestrator | Thursday 18 September 2025 00:38:08 +0000 (0:00:00.171) 0:00:26.460 **** 2025-09-18 00:38:10.972900 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-18 00:38:10.972910 | orchestrator | 2025-09-18 00:38:10.972921 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-18 00:38:10.972932 | orchestrator | 2025-09-18 00:38:10.972943 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 00:38:10.972954 | orchestrator | Thursday 18 September 2025 00:38:09 +0000 (0:00:00.932) 0:00:27.392 **** 2025-09-18 00:38:10.972964 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-18 00:38:10.972975 | orchestrator | 2025-09-18 00:38:10.972986 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 00:38:10.972996 | orchestrator | Thursday 18 September 2025 00:38:10 +0000 (0:00:00.334) 0:00:27.727 **** 2025-09-18 00:38:10.973015 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:38:10.973026 | orchestrator | 2025-09-18 00:38:10.973037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:10.973047 | orchestrator | Thursday 18 September 2025 00:38:10 +0000 (0:00:00.484) 0:00:28.211 **** 2025-09-18 00:38:10.973058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-18 00:38:10.973069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-18 00:38:10.973080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-18 00:38:10.973091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-18 00:38:10.973101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-18 00:38:10.973112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-18 00:38:10.973129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-18 00:38:18.905555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-18 00:38:18.905647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-18 00:38:18.905658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-18 00:38:18.905666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-18 00:38:18.905673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-18 00:38:18.905681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-18 00:38:18.905688 | orchestrator | 2025-09-18 00:38:18.905697 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905705 | orchestrator | Thursday 18 September 2025 00:38:10 +0000 (0:00:00.331) 0:00:28.543 **** 2025-09-18 00:38:18.905713 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905721 | orchestrator | 2025-09-18 00:38:18.905728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905736 | orchestrator | Thursday 18 September 2025 00:38:11 +0000 (0:00:00.177) 0:00:28.720 **** 2025-09-18 00:38:18.905743 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905750 | orchestrator | 2025-09-18 00:38:18.905758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905765 | orchestrator | Thursday 18 September 2025 00:38:11 +0000 (0:00:00.189) 0:00:28.910 **** 2025-09-18 00:38:18.905772 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905780 | orchestrator | 2025-09-18 00:38:18.905787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905794 | orchestrator | Thursday 18 September 2025 00:38:11 +0000 (0:00:00.185) 0:00:29.095 **** 2025-09-18 00:38:18.905802 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905809 | orchestrator | 2025-09-18 00:38:18.905816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905823 | orchestrator | Thursday 18 September 2025 00:38:11 +0000 (0:00:00.175) 0:00:29.270 **** 2025-09-18 00:38:18.905831 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905838 | orchestrator | 2025-09-18 00:38:18.905845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905853 | orchestrator | Thursday 18 September 2025 00:38:11 +0000 (0:00:00.199) 0:00:29.470 **** 2025-09-18 00:38:18.905860 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905867 | orchestrator | 2025-09-18 00:38:18.905875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905882 | orchestrator | Thursday 18 September 2025 00:38:12 +0000 (0:00:00.168) 0:00:29.639 **** 2025-09-18 00:38:18.905890 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905917 | orchestrator | 2025-09-18 00:38:18.905925 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-18 00:38:18.905932 | orchestrator | Thursday 18 September 2025 00:38:12 +0000 (0:00:00.170) 0:00:29.809 **** 2025-09-18 00:38:18.905940 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.905947 | orchestrator | 2025-09-18 00:38:18.905968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.905976 | orchestrator | Thursday 18 September 2025 00:38:12 +0000 (0:00:00.186) 0:00:29.996 **** 2025-09-18 00:38:18.905984 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e) 2025-09-18 00:38:18.905993 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e) 2025-09-18 00:38:18.906000 | orchestrator | 2025-09-18 00:38:18.906007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.906055 | orchestrator | Thursday 18 September 2025 00:38:12 +0000 (0:00:00.514) 0:00:30.511 **** 2025-09-18 00:38:18.906064 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739) 2025-09-18 00:38:18.906071 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739) 2025-09-18 00:38:18.906079 | orchestrator | 2025-09-18 00:38:18.906086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.906093 | orchestrator | Thursday 18 September 2025 00:38:13 +0000 (0:00:00.689) 0:00:31.200 **** 2025-09-18 00:38:18.906100 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd) 2025-09-18 00:38:18.906108 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd) 2025-09-18 00:38:18.906115 | orchestrator | 2025-09-18 00:38:18.906122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.906129 | orchestrator | Thursday 18 September 2025 00:38:14 +0000 (0:00:00.415) 0:00:31.615 **** 2025-09-18 00:38:18.906136 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c) 2025-09-18 00:38:18.906144 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c) 2025-09-18 00:38:18.906151 | orchestrator | 2025-09-18 00:38:18.906158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:38:18.906165 | orchestrator | Thursday 18 September 2025 00:38:14 +0000 (0:00:00.396) 0:00:32.012 **** 2025-09-18 00:38:18.906172 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 00:38:18.906180 | orchestrator | 2025-09-18 00:38:18.906187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906194 | orchestrator | Thursday 18 September 2025 00:38:14 +0000 (0:00:00.357) 0:00:32.369 **** 2025-09-18 00:38:18.906215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-18 00:38:18.906222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-18 00:38:18.906229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-18 00:38:18.906236 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-18 00:38:18.906243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-18 00:38:18.906267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-18 00:38:18.906275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-18 00:38:18.906282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-18 00:38:18.906289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-18 00:38:18.906316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-18 00:38:18.906323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-18 00:38:18.906330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-18 00:38:18.906337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-18 00:38:18.906345 | orchestrator | 2025-09-18 00:38:18.906352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906359 | orchestrator | Thursday 18 September 2025 00:38:15 +0000 (0:00:00.326) 0:00:32.696 **** 2025-09-18 00:38:18.906366 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906373 | orchestrator | 2025-09-18 00:38:18.906381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906388 | orchestrator | Thursday 18 September 2025 00:38:15 +0000 (0:00:00.190) 0:00:32.886 **** 2025-09-18 00:38:18.906395 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906402 | orchestrator | 2025-09-18 00:38:18.906409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906416 | orchestrator | Thursday 18 September 2025 00:38:15 +0000 (0:00:00.198) 0:00:33.085 **** 2025-09-18 00:38:18.906424 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906431 | orchestrator | 2025-09-18 00:38:18.906438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906445 | orchestrator | Thursday 18 September 2025 00:38:15 +0000 (0:00:00.192) 0:00:33.277 **** 2025-09-18 00:38:18.906453 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906460 | orchestrator | 2025-09-18 00:38:18.906467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906474 | orchestrator | Thursday 18 September 2025 00:38:15 +0000 (0:00:00.194) 0:00:33.471 **** 2025-09-18 00:38:18.906481 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906489 | orchestrator | 2025-09-18 00:38:18.906496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906503 | orchestrator | Thursday 18 September 2025 00:38:16 +0000 (0:00:00.194) 0:00:33.666 **** 2025-09-18 00:38:18.906510 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906517 | orchestrator | 2025-09-18 00:38:18.906525 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-18 00:38:18.906532 | orchestrator | Thursday 18 September 2025 00:38:16 +0000 (0:00:00.790) 0:00:34.456 **** 2025-09-18 00:38:18.906539 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906546 | orchestrator | 2025-09-18 00:38:18.906553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906561 | orchestrator | Thursday 18 September 2025 00:38:17 +0000 (0:00:00.242) 0:00:34.698 **** 2025-09-18 00:38:18.906568 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906575 | orchestrator | 2025-09-18 00:38:18.906582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906589 | orchestrator | Thursday 18 September 2025 00:38:17 +0000 (0:00:00.228) 0:00:34.927 **** 2025-09-18 00:38:18.906597 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-18 00:38:18.906604 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-18 00:38:18.906611 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-18 00:38:18.906618 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-18 00:38:18.906626 | orchestrator | 2025-09-18 00:38:18.906633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906640 | orchestrator | Thursday 18 September 2025 00:38:18 +0000 (0:00:00.692) 0:00:35.620 **** 2025-09-18 00:38:18.906647 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906654 | orchestrator | 2025-09-18 00:38:18.906662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906675 | orchestrator | Thursday 18 September 2025 00:38:18 +0000 (0:00:00.239) 0:00:35.860 **** 2025-09-18 00:38:18.906682 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906689 | orchestrator | 2025-09-18 00:38:18.906696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906703 | orchestrator | Thursday 18 September 2025 00:38:18 +0000 (0:00:00.218) 0:00:36.079 **** 2025-09-18 00:38:18.906710 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906718 | orchestrator | 2025-09-18 00:38:18.906725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:38:18.906732 | orchestrator | Thursday 18 September 2025 00:38:18 +0000 (0:00:00.212) 0:00:36.291 **** 2025-09-18 00:38:18.906744 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:18.906752 | orchestrator | 2025-09-18 00:38:18.906759 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-18 00:38:18.906770 | orchestrator | Thursday 18 September 2025 00:38:18 +0000 (0:00:00.190) 0:00:36.482 **** 2025-09-18 00:38:22.884470 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-18 00:38:22.884549 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-18 00:38:22.884562 | orchestrator | 2025-09-18 00:38:22.884574 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-18 00:38:22.884583 | orchestrator | Thursday 18 September 2025 00:38:19 +0000 (0:00:00.218) 0:00:36.700 **** 2025-09-18 00:38:22.884593 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.884603 | orchestrator | 2025-09-18 00:38:22.884613 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-18 00:38:22.884622 | orchestrator | Thursday 18 September 2025 00:38:19 +0000 (0:00:00.131) 0:00:36.832 **** 2025-09-18 00:38:22.884632 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.884641 | orchestrator | 2025-09-18 00:38:22.884651 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-18 00:38:22.884661 | orchestrator | Thursday 18 September 2025 00:38:19 +0000 (0:00:00.175) 0:00:37.007 **** 2025-09-18 00:38:22.884670 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.884679 | orchestrator | 2025-09-18 00:38:22.884689 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-18 00:38:22.884698 | orchestrator | Thursday 18 September 2025 00:38:19 +0000 (0:00:00.139) 0:00:37.146 **** 2025-09-18 00:38:22.884708 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:38:22.884718 | orchestrator | 2025-09-18 00:38:22.884727 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-18 00:38:22.884737 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.482) 0:00:37.628 **** 2025-09-18 00:38:22.884747 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07829316-95ed-5d0c-8777-c74850e385f5'}}) 2025-09-18 00:38:22.884757 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '48f1b2b0-1ebe-571e-b515-4e988bd235b0'}}) 2025-09-18 00:38:22.884767 | orchestrator | 2025-09-18 00:38:22.884776 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-18 00:38:22.884786 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.254) 0:00:37.883 **** 2025-09-18 00:38:22.884795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07829316-95ed-5d0c-8777-c74850e385f5'}})  2025-09-18 00:38:22.884806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '48f1b2b0-1ebe-571e-b515-4e988bd235b0'}})  2025-09-18 00:38:22.884815 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.884825 | orchestrator | 2025-09-18 00:38:22.884848 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-18 00:38:22.884859 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.162) 0:00:38.045 **** 2025-09-18 00:38:22.884868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07829316-95ed-5d0c-8777-c74850e385f5'}})  2025-09-18 00:38:22.884897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '48f1b2b0-1ebe-571e-b515-4e988bd235b0'}})  2025-09-18 00:38:22.884907 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.884917 | orchestrator | 2025-09-18 00:38:22.884926 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-18 00:38:22.884936 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.158) 0:00:38.204 **** 2025-09-18 00:38:22.884945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07829316-95ed-5d0c-8777-c74850e385f5'}})  2025-09-18 00:38:22.884955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '48f1b2b0-1ebe-571e-b515-4e988bd235b0'}})  2025-09-18 
00:38:22.884964 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.884973 | orchestrator | 2025-09-18 00:38:22.884983 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-18 00:38:22.884992 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.161) 0:00:38.365 **** 2025-09-18 00:38:22.885002 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:38:22.885011 | orchestrator | 2025-09-18 00:38:22.885020 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-18 00:38:22.885030 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.101) 0:00:38.467 **** 2025-09-18 00:38:22.885041 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:38:22.885052 | orchestrator | 2025-09-18 00:38:22.885063 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-18 00:38:22.885074 | orchestrator | Thursday 18 September 2025 00:38:20 +0000 (0:00:00.108) 0:00:38.576 **** 2025-09-18 00:38:22.885085 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.885096 | orchestrator | 2025-09-18 00:38:22.885106 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-18 00:38:22.885118 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.109) 0:00:38.685 **** 2025-09-18 00:38:22.885128 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.885140 | orchestrator | 2025-09-18 00:38:22.885151 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-18 00:38:22.885162 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.118) 0:00:38.804 **** 2025-09-18 00:38:22.885172 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.885183 | orchestrator | 2025-09-18 00:38:22.885194 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-18 00:38:22.885205 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.113) 0:00:38.917 **** 2025-09-18 00:38:22.885216 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 00:38:22.885227 | orchestrator |  "ceph_osd_devices": { 2025-09-18 00:38:22.885238 | orchestrator |  "sdb": { 2025-09-18 00:38:22.885269 | orchestrator |  "osd_lvm_uuid": "07829316-95ed-5d0c-8777-c74850e385f5" 2025-09-18 00:38:22.885296 | orchestrator |  }, 2025-09-18 00:38:22.885308 | orchestrator |  "sdc": { 2025-09-18 00:38:22.885319 | orchestrator |  "osd_lvm_uuid": "48f1b2b0-1ebe-571e-b515-4e988bd235b0" 2025-09-18 00:38:22.885330 | orchestrator |  } 2025-09-18 00:38:22.885341 | orchestrator |  } 2025-09-18 00:38:22.885353 | orchestrator | } 2025-09-18 00:38:22.885364 | orchestrator | 2025-09-18 00:38:22.885375 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-18 00:38:22.885386 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.113) 0:00:39.030 **** 2025-09-18 00:38:22.885395 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.885404 | orchestrator | 2025-09-18 00:38:22.885414 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-18 00:38:22.885423 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.108) 0:00:39.139 **** 2025-09-18 00:38:22.885432 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.885442 | orchestrator | 2025-09-18 00:38:22.885451 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-18 00:38:22.885467 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.249) 0:00:39.389 **** 2025-09-18 00:38:22.885477 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:38:22.885486 | orchestrator | 2025-09-18 00:38:22.885496 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-18 00:38:22.885505 | orchestrator | Thursday 18 September 2025 00:38:21 +0000 (0:00:00.115) 0:00:39.505 **** 2025-09-18 00:38:22.885515 | orchestrator | changed: [testbed-node-5] => { 2025-09-18 00:38:22.885524 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-18 00:38:22.885534 | orchestrator |  "ceph_osd_devices": { 2025-09-18 00:38:22.885543 | orchestrator |  "sdb": { 2025-09-18 00:38:22.885553 | orchestrator |  "osd_lvm_uuid": "07829316-95ed-5d0c-8777-c74850e385f5" 2025-09-18 00:38:22.885562 | orchestrator |  }, 2025-09-18 00:38:22.885572 | orchestrator |  "sdc": { 2025-09-18 00:38:22.885581 | orchestrator |  "osd_lvm_uuid": "48f1b2b0-1ebe-571e-b515-4e988bd235b0" 2025-09-18 00:38:22.885591 | orchestrator |  } 2025-09-18 00:38:22.885600 | orchestrator |  }, 2025-09-18 00:38:22.885610 | orchestrator |  "lvm_volumes": [ 2025-09-18 00:38:22.885619 | orchestrator |  { 2025-09-18 00:38:22.885629 | orchestrator |  "data": "osd-block-07829316-95ed-5d0c-8777-c74850e385f5", 2025-09-18 00:38:22.885638 | orchestrator |  "data_vg": "ceph-07829316-95ed-5d0c-8777-c74850e385f5" 2025-09-18 00:38:22.885648 | orchestrator |  }, 2025-09-18 00:38:22.885657 | orchestrator |  { 2025-09-18 00:38:22.885667 | orchestrator |  "data": "osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0", 2025-09-18 00:38:22.885676 | orchestrator |  "data_vg": "ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0" 2025-09-18 00:38:22.885686 | orchestrator |  } 2025-09-18 00:38:22.885695 | orchestrator |  ] 2025-09-18 00:38:22.885705 | orchestrator |  } 2025-09-18 00:38:22.885718 | orchestrator | } 2025-09-18 00:38:22.885728 | orchestrator | 2025-09-18 00:38:22.885738 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-18 00:38:22.885748 | orchestrator | Thursday 18 September 2025 00:38:22 +0000 (0:00:00.183) 0:00:39.689 **** 2025-09-18 00:38:22.885757 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-18 00:38:22.885767 | orchestrator | 2025-09-18 00:38:22.885776 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:38:22.885792 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 00:38:22.885803 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 00:38:22.885813 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 00:38:22.885823 | orchestrator | 2025-09-18 00:38:22.885832 | orchestrator | 2025-09-18 00:38:22.885842 | orchestrator | 2025-09-18 00:38:22.885851 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:38:22.885861 | orchestrator | Thursday 18 September 2025 00:38:22 +0000 (0:00:00.766) 0:00:40.456 **** 2025-09-18 00:38:22.885870 | orchestrator | =============================================================================== 2025-09-18 00:38:22.885880 | orchestrator | Write configuration file 
------------------------------------------------ 3.89s
2025-09-18 00:38:22.885889 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s
2025-09-18 00:38:22.885899 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2025-09-18 00:38:22.885908 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2025-09-18 00:38:22.885918 | orchestrator | Get initial list of available block devices ----------------------------- 0.95s
2025-09-18 00:38:22.885933 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s
2025-09-18 00:38:22.885942 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2025-09-18 00:38:22.885952 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.76s
2025-09-18 00:38:22.885961 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-09-18 00:38:22.885971 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-09-18 00:38:22.885980 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-09-18 00:38:22.885990 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-09-18 00:38:22.885999 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.69s
2025-09-18 00:38:22.886009 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-09-18 00:38:22.886072 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-09-18 00:38:23.146979 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-09-18 00:38:23.147059 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.65s
2025-09-18 00:38:23.147073 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.58s
2025-09-18 00:38:23.147084 | orchestrator | Print configuration data ------------------------------------------------ 0.58s
2025-09-18 00:38:23.147095 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2025-09-18 00:38:45.847715 | orchestrator | 2025-09-18 00:38:45 | INFO  | Task 140cdd98-7c9c-48a1-aea6-48105bb2384b (sync inventory) is running in background. Output coming soon.
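For each storage node, the play compiles the two structures shown in the "Print configuration data" output above: ceph_osd_devices (device name mapped to its osd_lvm_uuid) and lvm_volumes, where each block-only entry is derived from the UUID as data: osd-block-<uuid> and data_vg: ceph-<uuid>. A minimal sketch of the data the "Write configuration file" handler persists on testbed-manager for testbed-node-3, assuming a plain YAML host_vars layout (the file layout is an assumption; the UUIDs are the ones printed in this log):

    # Assumed shape of the per-host configuration written by the handler;
    # values are taken verbatim from the testbed-node-3 output above.
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 0cde6920-619d-54be-8750-7c50463ca655
      sdc:
        osd_lvm_uuid: 3ac78a0a-4049-5f74-bf32-d6052d628b7d
    lvm_volumes:
      - data: osd-block-0cde6920-619d-54be-8750-7c50463ca655
        data_vg: ceph-0cde6920-619d-54be-8750-7c50463ca655
      - data: osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d
        data_vg: ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d

The PLAY RECAP above shows the same result (ok=42, changed=2) on testbed-node-4 and testbed-node-5 with their respective UUIDs.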
2025-09-18 00:39:08.033186 | orchestrator | 2025-09-18 00:38:47 | INFO  | Starting group_vars file reorganization
2025-09-18 00:39:08.033240 | orchestrator | 2025-09-18 00:38:47 | INFO  | Moved 0 file(s) to their respective directories
2025-09-18 00:39:08.033247 | orchestrator | 2025-09-18 00:38:47 | INFO  | Group_vars file reorganization completed
2025-09-18 00:39:08.033251 | orchestrator | 2025-09-18 00:38:49 | INFO  | Starting variable preparation from inventory
2025-09-18 00:39:08.033255 | orchestrator | 2025-09-18 00:38:52 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-18 00:39:08.033259 | orchestrator | 2025-09-18 00:38:52 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-18 00:39:08.033263 | orchestrator | 2025-09-18 00:38:52 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-18 00:39:08.033267 | orchestrator | 2025-09-18 00:38:52 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-18 00:39:08.033271 | orchestrator | 2025-09-18 00:38:52 | INFO  | Variable preparation completed
2025-09-18 00:39:08.033275 | orchestrator | 2025-09-18 00:38:53 | INFO  | Starting inventory overwrite handling
2025-09-18 00:39:08.033279 | orchestrator | 2025-09-18 00:38:53 | INFO  | Handling group overwrites in 99-overwrite
2025-09-18 00:39:08.033283 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removing group frr:children from 60-generic
2025-09-18 00:39:08.033310 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removing group storage:children from 50-kolla
2025-09-18 00:39:08.033314 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-18 00:39:08.033318 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-18 00:39:08.033322 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-18 00:39:08.033326 | orchestrator | 2025-09-18 00:38:53 | INFO  | Handling group overwrites in 20-roles
2025-09-18 00:39:08.033330 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-18 00:39:08.033344 | orchestrator | 2025-09-18 00:38:53 | INFO  | Removed 6 group(s) in total
2025-09-18 00:39:08.033348 | orchestrator | 2025-09-18 00:38:53 | INFO  | Inventory overwrite handling completed
2025-09-18 00:39:08.033352 | orchestrator | 2025-09-18 00:38:54 | INFO  | Starting merge of inventory files
2025-09-18 00:39:08.033356 | orchestrator | 2025-09-18 00:38:54 | INFO  | Inventory files merged successfully
2025-09-18 00:39:08.033359 | orchestrator | 2025-09-18 00:38:57 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-18 00:39:08.033363 | orchestrator | 2025-09-18 00:39:06 | INFO  | Successfully wrote ClusterShell configuration
2025-09-18 00:39:08.033367 | orchestrator | [master 36e4384] 2025-09-18-00-39
2025-09-18 00:39:08.033371 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-18 00:39:10.169235 | orchestrator | 2025-09-18 00:39:10 | INFO  | Task 24e57feb-75f2-4909-89d0-861a7c0b2ac1 (ceph-create-lvm-devices) was prepared for execution.
2025-09-18 00:39:10.169365 | orchestrator | 2025-09-18 00:39:10 | INFO  | It takes a moment until task 24e57feb-75f2-4909-89d0-861a7c0b2ac1 (ceph-create-lvm-devices) has been started and output is visible here.
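The ceph-create-lvm-devices task announced above consumes the lvm_volumes entries written in the previous step (the format mirrors what ceph-ansible expects for LVM-backed OSDs). As an illustrative sketch only, and not the actual OSISM tasks: creating one volume group per OSD plus a block logical volume filling it could look like the following, where osd_device_path is a placeholder for the physical device that the real play resolves from ceph_osd_devices:

    # Illustrative sketch, not the OSISM implementation.
    - name: Create one volume group per OSD (sketch)
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ osd_device_path }}"  # placeholder; derived from ceph_osd_devices in reality
      loop: "{{ lvm_volumes }}"

    - name: Create the block logical volume in each volume group (sketch)
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE
        shrink: false
      loop: "{{ lvm_volumes }}"

The play output that follows shows the same device discovery steps as the configuration play, now running against the compiled configuration.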
2025-09-18 00:39:21.965646 | orchestrator | 2025-09-18 00:39:21.965759 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-18 00:39:21.965776 | orchestrator | 2025-09-18 00:39:21.965788 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 00:39:21.965800 | orchestrator | Thursday 18 September 2025 00:39:14 +0000 (0:00:00.315) 0:00:00.315 **** 2025-09-18 00:39:21.965812 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-18 00:39:21.965823 | orchestrator | 2025-09-18 00:39:21.965834 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 00:39:21.965845 | orchestrator | Thursday 18 September 2025 00:39:14 +0000 (0:00:00.228) 0:00:00.544 **** 2025-09-18 00:39:21.965856 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:21.965868 | orchestrator | 2025-09-18 00:39:21.965879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.965890 | orchestrator | Thursday 18 September 2025 00:39:14 +0000 (0:00:00.216) 0:00:00.760 **** 2025-09-18 00:39:21.965901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-18 00:39:21.965914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-18 00:39:21.965925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-18 00:39:21.965936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-18 00:39:21.965947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-18 00:39:21.965957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-18 00:39:21.965968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-18 00:39:21.965979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-18 00:39:21.965990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-18 00:39:21.966001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-18 00:39:21.966011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-18 00:39:21.966085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-18 00:39:21.966097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-18 00:39:21.966107 | orchestrator | 2025-09-18 00:39:21.966118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966154 | orchestrator | Thursday 18 September 2025 00:39:15 +0000 (0:00:00.416) 0:00:01.176 **** 2025-09-18 00:39:21.966166 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966177 | orchestrator | 2025-09-18 00:39:21.966187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966216 | orchestrator | Thursday 18 September 2025 00:39:15 +0000 (0:00:00.506) 0:00:01.683 **** 2025-09-18 00:39:21.966228 | orchestrator | skipping: [testbed-node-3] 2025-09-18 
00:39:21.966239 | orchestrator | 2025-09-18 00:39:21.966250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966261 | orchestrator | Thursday 18 September 2025 00:39:15 +0000 (0:00:00.202) 0:00:01.885 **** 2025-09-18 00:39:21.966277 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966288 | orchestrator | 2025-09-18 00:39:21.966299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966336 | orchestrator | Thursday 18 September 2025 00:39:16 +0000 (0:00:00.202) 0:00:02.088 **** 2025-09-18 00:39:21.966347 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966358 | orchestrator | 2025-09-18 00:39:21.966369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966380 | orchestrator | Thursday 18 September 2025 00:39:16 +0000 (0:00:00.202) 0:00:02.291 **** 2025-09-18 00:39:21.966391 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966402 | orchestrator | 2025-09-18 00:39:21.966413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966424 | orchestrator | Thursday 18 September 2025 00:39:16 +0000 (0:00:00.201) 0:00:02.492 **** 2025-09-18 00:39:21.966434 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966445 | orchestrator | 2025-09-18 00:39:21.966456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966467 | orchestrator | Thursday 18 September 2025 00:39:16 +0000 (0:00:00.197) 0:00:02.689 **** 2025-09-18 00:39:21.966478 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966489 | orchestrator | 2025-09-18 00:39:21.966500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966511 | orchestrator | Thursday 18 September 2025 00:39:16 +0000 (0:00:00.205) 0:00:02.895 **** 2025-09-18 00:39:21.966521 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.966532 | orchestrator | 2025-09-18 00:39:21.966543 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966554 | orchestrator | Thursday 18 September 2025 00:39:17 +0000 (0:00:00.217) 0:00:03.113 **** 2025-09-18 00:39:21.966565 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f) 2025-09-18 00:39:21.966577 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f) 2025-09-18 00:39:21.966588 | orchestrator | 2025-09-18 00:39:21.966599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966610 | orchestrator | Thursday 18 September 2025 00:39:17 +0000 (0:00:00.404) 0:00:03.518 **** 2025-09-18 00:39:21.966638 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca) 2025-09-18 00:39:21.966650 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca) 2025-09-18 00:39:21.966661 | orchestrator | 2025-09-18 00:39:21.966672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966682 | orchestrator | Thursday 18 September 2025 00:39:17 +0000 (0:00:00.407) 0:00:03.925 **** 2025-09-18 
00:39:21.966693 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738) 2025-09-18 00:39:21.966704 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738) 2025-09-18 00:39:21.966715 | orchestrator | 2025-09-18 00:39:21.966725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966745 | orchestrator | Thursday 18 September 2025 00:39:18 +0000 (0:00:00.638) 0:00:04.564 **** 2025-09-18 00:39:21.966756 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97) 2025-09-18 00:39:21.966767 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97) 2025-09-18 00:39:21.966777 | orchestrator | 2025-09-18 00:39:21.966788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:21.966799 | orchestrator | Thursday 18 September 2025 00:39:19 +0000 (0:00:00.825) 0:00:05.389 **** 2025-09-18 00:39:21.966809 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 00:39:21.966820 | orchestrator | 2025-09-18 00:39:21.966831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.966842 | orchestrator | Thursday 18 September 2025 00:39:19 +0000 (0:00:00.322) 0:00:05.712 **** 2025-09-18 00:39:21.966852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-18 00:39:21.966863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-18 00:39:21.966873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-18 00:39:21.966884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-18 00:39:21.966894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-18 00:39:21.966905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-18 00:39:21.966916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-18 00:39:21.966926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-18 00:39:21.966937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-18 00:39:21.966947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-18 00:39:21.966958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-18 00:39:21.966968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-18 00:39:21.966978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-18 00:39:21.966989 | orchestrator | 2025-09-18 00:39:21.967000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967011 | orchestrator | Thursday 18 September 2025 00:39:20 +0000 (0:00:00.467) 0:00:06.179 **** 2025-09-18 00:39:21.967021 | orchestrator | skipping: [testbed-node-3] 
2025-09-18 00:39:21.967032 | orchestrator | 2025-09-18 00:39:21.967043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967053 | orchestrator | Thursday 18 September 2025 00:39:20 +0000 (0:00:00.249) 0:00:06.429 **** 2025-09-18 00:39:21.967064 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.967075 | orchestrator | 2025-09-18 00:39:21.967085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967096 | orchestrator | Thursday 18 September 2025 00:39:20 +0000 (0:00:00.201) 0:00:06.631 **** 2025-09-18 00:39:21.967106 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.967117 | orchestrator | 2025-09-18 00:39:21.967128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967138 | orchestrator | Thursday 18 September 2025 00:39:20 +0000 (0:00:00.201) 0:00:06.832 **** 2025-09-18 00:39:21.967149 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.967160 | orchestrator | 2025-09-18 00:39:21.967170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967188 | orchestrator | Thursday 18 September 2025 00:39:21 +0000 (0:00:00.257) 0:00:07.090 **** 2025-09-18 00:39:21.967199 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.967210 | orchestrator | 2025-09-18 00:39:21.967221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967231 | orchestrator | Thursday 18 September 2025 00:39:21 +0000 (0:00:00.214) 0:00:07.305 **** 2025-09-18 00:39:21.967242 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.967252 | orchestrator | 2025-09-18 00:39:21.967278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967324 | orchestrator | Thursday 18 September 2025 00:39:21 +0000 (0:00:00.225) 0:00:07.530 **** 2025-09-18 00:39:21.967337 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:21.967348 | orchestrator | 2025-09-18 00:39:21.967358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:21.967369 | orchestrator | Thursday 18 September 2025 00:39:21 +0000 (0:00:00.236) 0:00:07.766 **** 2025-09-18 00:39:21.967387 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268418 | orchestrator | 2025-09-18 00:39:30.268495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:30.268506 | orchestrator | Thursday 18 September 2025 00:39:21 +0000 (0:00:00.235) 0:00:08.002 **** 2025-09-18 00:39:30.268515 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-18 00:39:30.268524 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-18 00:39:30.268532 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-18 00:39:30.268540 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-18 00:39:30.268548 | orchestrator | 2025-09-18 00:39:30.268556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:30.268564 | orchestrator | Thursday 18 September 2025 00:39:23 +0000 (0:00:01.492) 0:00:09.494 **** 2025-09-18 00:39:30.268572 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268580 | orchestrator | 2025-09-18 00:39:30.268587 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:30.268595 | orchestrator | Thursday 18 September 2025 00:39:23 +0000 (0:00:00.259) 0:00:09.753 **** 2025-09-18 00:39:30.268603 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268611 | orchestrator | 2025-09-18 00:39:30.268619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:30.268627 | orchestrator | Thursday 18 September 2025 00:39:23 +0000 (0:00:00.257) 0:00:10.011 **** 2025-09-18 00:39:30.268635 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268643 | orchestrator | 2025-09-18 00:39:30.268651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:30.268659 | orchestrator | Thursday 18 September 2025 00:39:24 +0000 (0:00:00.234) 0:00:10.246 **** 2025-09-18 00:39:30.268667 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268675 | orchestrator | 2025-09-18 00:39:30.268683 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-18 00:39:30.268691 | orchestrator | Thursday 18 September 2025 00:39:24 +0000 (0:00:00.225) 0:00:10.471 **** 2025-09-18 00:39:30.268698 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268706 | orchestrator | 2025-09-18 00:39:30.268714 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-18 00:39:30.268722 | orchestrator | Thursday 18 September 2025 00:39:24 +0000 (0:00:00.137) 0:00:10.608 **** 2025-09-18 00:39:30.268730 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cde6920-619d-54be-8750-7c50463ca655'}}) 2025-09-18 00:39:30.268739 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ac78a0a-4049-5f74-bf32-d6052d628b7d'}}) 2025-09-18 00:39:30.268747 | orchestrator | 2025-09-18 00:39:30.268754 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-18 00:39:30.268762 | orchestrator | Thursday 18 September 2025 00:39:24 +0000 (0:00:00.194) 0:00:10.803 **** 2025-09-18 00:39:30.268771 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'}) 2025-09-18 00:39:30.268796 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'}) 2025-09-18 00:39:30.268804 | orchestrator | 2025-09-18 00:39:30.268824 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-18 00:39:30.268836 | orchestrator | Thursday 18 September 2025 00:39:26 +0000 (0:00:01.838) 0:00:12.642 **** 2025-09-18 00:39:30.268844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.268853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.268861 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268869 | orchestrator | 2025-09-18 00:39:30.268876 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-18 
00:39:30.268884 | orchestrator | Thursday 18 September 2025 00:39:26 +0000 (0:00:00.133) 0:00:12.775 **** 2025-09-18 00:39:30.268892 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'}) 2025-09-18 00:39:30.268900 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'}) 2025-09-18 00:39:30.268908 | orchestrator | 2025-09-18 00:39:30.268915 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-18 00:39:30.268923 | orchestrator | Thursday 18 September 2025 00:39:28 +0000 (0:00:01.434) 0:00:14.210 **** 2025-09-18 00:39:30.268931 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.268939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.268947 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268955 | orchestrator | 2025-09-18 00:39:30.268965 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-18 00:39:30.268974 | orchestrator | Thursday 18 September 2025 00:39:28 +0000 (0:00:00.146) 0:00:14.357 **** 2025-09-18 00:39:30.268983 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.268992 | orchestrator | 2025-09-18 00:39:30.269002 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-18 00:39:30.269023 | orchestrator | Thursday 18 September 2025 00:39:28 +0000 (0:00:00.129) 0:00:14.486 **** 2025-09-18 00:39:30.269033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.269042 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.269051 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269060 | orchestrator | 2025-09-18 00:39:30.269069 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-18 00:39:30.269078 | orchestrator | Thursday 18 September 2025 00:39:28 +0000 (0:00:00.275) 0:00:14.762 **** 2025-09-18 00:39:30.269088 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269096 | orchestrator | 2025-09-18 00:39:30.269106 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-18 00:39:30.269114 | orchestrator | Thursday 18 September 2025 00:39:28 +0000 (0:00:00.128) 0:00:14.890 **** 2025-09-18 00:39:30.269124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.269139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.269148 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269156 | orchestrator | 2025-09-18 00:39:30.269164 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-18 00:39:30.269171 | orchestrator | Thursday 18 September 2025 00:39:28 +0000 (0:00:00.149) 0:00:15.039 **** 2025-09-18 00:39:30.269179 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269187 | orchestrator | 2025-09-18 00:39:30.269195 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-18 00:39:30.269202 | orchestrator | Thursday 18 September 2025 00:39:29 +0000 (0:00:00.162) 0:00:15.201 **** 2025-09-18 00:39:30.269210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.269218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.269226 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269234 | orchestrator | 2025-09-18 00:39:30.269242 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-18 00:39:30.269250 | orchestrator | Thursday 18 September 2025 00:39:29 +0000 (0:00:00.145) 0:00:15.346 **** 2025-09-18 00:39:30.269258 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:30.269265 | orchestrator | 2025-09-18 00:39:30.269273 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-18 00:39:30.269281 | orchestrator | Thursday 18 September 2025 00:39:29 +0000 (0:00:00.171) 0:00:15.517 **** 2025-09-18 00:39:30.269293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.269301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.269309 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269317 | orchestrator | 2025-09-18 00:39:30.269339 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-18 00:39:30.269348 | orchestrator | Thursday 18 September 2025 00:39:29 +0000 (0:00:00.173) 0:00:15.691 **** 2025-09-18 00:39:30.269356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.269364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:30.269372 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269379 | orchestrator | 2025-09-18 00:39:30.269387 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-18 00:39:30.269395 | orchestrator | Thursday 18 September 2025 00:39:29 +0000 (0:00:00.181) 0:00:15.873 **** 2025-09-18 00:39:30.269403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:30.269411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  
2025-09-18 00:39:30.269419 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269427 | orchestrator | 2025-09-18 00:39:30.269435 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-18 00:39:30.269442 | orchestrator | Thursday 18 September 2025 00:39:29 +0000 (0:00:00.141) 0:00:16.014 **** 2025-09-18 00:39:30.269450 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269463 | orchestrator | 2025-09-18 00:39:30.269471 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-18 00:39:30.269479 | orchestrator | Thursday 18 September 2025 00:39:30 +0000 (0:00:00.139) 0:00:16.154 **** 2025-09-18 00:39:30.269486 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:30.269494 | orchestrator | 2025-09-18 00:39:30.269506 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-18 00:39:37.113324 | orchestrator | Thursday 18 September 2025 00:39:30 +0000 (0:00:00.156) 0:00:16.310 **** 2025-09-18 00:39:37.113473 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.113490 | orchestrator | 2025-09-18 00:39:37.113502 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-18 00:39:37.113514 | orchestrator | Thursday 18 September 2025 00:39:30 +0000 (0:00:00.144) 0:00:16.455 **** 2025-09-18 00:39:37.113525 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 00:39:37.113536 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-18 00:39:37.113548 | orchestrator | } 2025-09-18 00:39:37.113559 | orchestrator | 2025-09-18 00:39:37.113570 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-18 00:39:37.113581 | orchestrator | Thursday 18 September 2025 00:39:30 +0000 (0:00:00.551) 0:00:17.006 **** 2025-09-18 00:39:37.113593 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 00:39:37.113604 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-18 00:39:37.113615 | orchestrator | } 2025-09-18 00:39:37.113626 | orchestrator | 2025-09-18 00:39:37.113637 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-18 00:39:37.113649 | orchestrator | Thursday 18 September 2025 00:39:31 +0000 (0:00:00.164) 0:00:17.171 **** 2025-09-18 00:39:37.113660 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 00:39:37.113671 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-18 00:39:37.113682 | orchestrator | } 2025-09-18 00:39:37.113694 | orchestrator | 2025-09-18 00:39:37.113706 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-18 00:39:37.113717 | orchestrator | Thursday 18 September 2025 00:39:31 +0000 (0:00:00.172) 0:00:17.344 **** 2025-09-18 00:39:37.113728 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:37.113739 | orchestrator | 2025-09-18 00:39:37.113750 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-18 00:39:37.113761 | orchestrator | Thursday 18 September 2025 00:39:31 +0000 (0:00:00.702) 0:00:18.046 **** 2025-09-18 00:39:37.113772 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:37.113783 | orchestrator | 2025-09-18 00:39:37.113794 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-18 00:39:37.113805 | orchestrator | Thursday 18 September 2025 00:39:32 +0000 
(0:00:00.582) 0:00:18.629 **** 2025-09-18 00:39:37.113816 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:37.113827 | orchestrator | 2025-09-18 00:39:37.113838 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-18 00:39:37.113849 | orchestrator | Thursday 18 September 2025 00:39:33 +0000 (0:00:00.571) 0:00:19.201 **** 2025-09-18 00:39:37.113860 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:37.113871 | orchestrator | 2025-09-18 00:39:37.113884 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-18 00:39:37.113897 | orchestrator | Thursday 18 September 2025 00:39:33 +0000 (0:00:00.175) 0:00:19.376 **** 2025-09-18 00:39:37.113909 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.113922 | orchestrator | 2025-09-18 00:39:37.113935 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-18 00:39:37.113947 | orchestrator | Thursday 18 September 2025 00:39:33 +0000 (0:00:00.142) 0:00:19.519 **** 2025-09-18 00:39:37.113960 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.113973 | orchestrator | 2025-09-18 00:39:37.113985 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-18 00:39:37.113998 | orchestrator | Thursday 18 September 2025 00:39:33 +0000 (0:00:00.170) 0:00:19.689 **** 2025-09-18 00:39:37.114011 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 00:39:37.114095 | orchestrator |  "vgs_report": { 2025-09-18 00:39:37.114110 | orchestrator |  "vg": [] 2025-09-18 00:39:37.114123 | orchestrator |  } 2025-09-18 00:39:37.114134 | orchestrator | } 2025-09-18 00:39:37.114145 | orchestrator | 2025-09-18 00:39:37.114156 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-18 00:39:37.114166 | orchestrator | Thursday 18 September 2025 00:39:33 +0000 (0:00:00.175) 0:00:19.864 **** 2025-09-18 00:39:37.114177 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114188 | orchestrator | 2025-09-18 00:39:37.114199 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-18 00:39:37.114210 | orchestrator | Thursday 18 September 2025 00:39:33 +0000 (0:00:00.171) 0:00:20.036 **** 2025-09-18 00:39:37.114221 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114231 | orchestrator | 2025-09-18 00:39:37.114242 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-18 00:39:37.114253 | orchestrator | Thursday 18 September 2025 00:39:34 +0000 (0:00:00.142) 0:00:20.178 **** 2025-09-18 00:39:37.114263 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114274 | orchestrator | 2025-09-18 00:39:37.114285 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-18 00:39:37.114295 | orchestrator | Thursday 18 September 2025 00:39:34 +0000 (0:00:00.336) 0:00:20.515 **** 2025-09-18 00:39:37.114306 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114317 | orchestrator | 2025-09-18 00:39:37.114328 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-18 00:39:37.114358 | orchestrator | Thursday 18 September 2025 00:39:34 +0000 (0:00:00.125) 0:00:20.641 **** 2025-09-18 00:39:37.114369 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114380 | orchestrator | 
2025-09-18 00:39:37.114406 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-18 00:39:37.114418 | orchestrator | Thursday 18 September 2025 00:39:34 +0000 (0:00:00.131) 0:00:20.772 **** 2025-09-18 00:39:37.114429 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114439 | orchestrator | 2025-09-18 00:39:37.114450 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-18 00:39:37.114461 | orchestrator | Thursday 18 September 2025 00:39:34 +0000 (0:00:00.123) 0:00:20.896 **** 2025-09-18 00:39:37.114472 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114483 | orchestrator | 2025-09-18 00:39:37.114494 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-18 00:39:37.114504 | orchestrator | Thursday 18 September 2025 00:39:34 +0000 (0:00:00.118) 0:00:21.014 **** 2025-09-18 00:39:37.114515 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114526 | orchestrator | 2025-09-18 00:39:37.114537 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-18 00:39:37.114565 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.217) 0:00:21.231 **** 2025-09-18 00:39:37.114577 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114588 | orchestrator | 2025-09-18 00:39:37.114599 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-18 00:39:37.114610 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.136) 0:00:21.368 **** 2025-09-18 00:39:37.114621 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114632 | orchestrator | 2025-09-18 00:39:37.114643 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-18 00:39:37.114654 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.128) 0:00:21.496 **** 2025-09-18 00:39:37.114664 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114676 | orchestrator | 2025-09-18 00:39:37.114687 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-18 00:39:37.114697 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.130) 0:00:21.627 **** 2025-09-18 00:39:37.114708 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114719 | orchestrator | 2025-09-18 00:39:37.114738 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-18 00:39:37.114749 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.158) 0:00:21.785 **** 2025-09-18 00:39:37.114760 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114771 | orchestrator | 2025-09-18 00:39:37.114782 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-18 00:39:37.114793 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.115) 0:00:21.901 **** 2025-09-18 00:39:37.114804 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114815 | orchestrator | 2025-09-18 00:39:37.114826 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-18 00:39:37.114837 | orchestrator | Thursday 18 September 2025 00:39:35 +0000 (0:00:00.115) 0:00:22.016 **** 2025-09-18 00:39:37.114849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:37.114861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:37.114872 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114883 | orchestrator | 2025-09-18 00:39:37.114894 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-18 00:39:37.114905 | orchestrator | Thursday 18 September 2025 00:39:36 +0000 (0:00:00.397) 0:00:22.414 **** 2025-09-18 00:39:37.114916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:37.114927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:37.114938 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.114949 | orchestrator | 2025-09-18 00:39:37.114960 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-18 00:39:37.114971 | orchestrator | Thursday 18 September 2025 00:39:36 +0000 (0:00:00.162) 0:00:22.576 **** 2025-09-18 00:39:37.114986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:37.114997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:37.115008 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.115019 | orchestrator | 2025-09-18 00:39:37.115030 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-18 00:39:37.115040 | orchestrator | Thursday 18 September 2025 00:39:36 +0000 (0:00:00.145) 0:00:22.722 **** 2025-09-18 00:39:37.115051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:37.115062 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:37.115073 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.115084 | orchestrator | 2025-09-18 00:39:37.115095 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-18 00:39:37.115106 | orchestrator | Thursday 18 September 2025 00:39:36 +0000 (0:00:00.137) 0:00:22.859 **** 2025-09-18 00:39:37.115117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:37.115128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:37.115139 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:37.115156 | orchestrator | 2025-09-18 00:39:37.115167 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-18 00:39:37.115178 | orchestrator | Thursday 18 September 2025 00:39:36 +0000 (0:00:00.163) 0:00:23.023 **** 2025-09-18 00:39:37.115189 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:37.115205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:42.951322 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:42.951487 | orchestrator | 2025-09-18 00:39:42.951506 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-18 00:39:42.951520 | orchestrator | Thursday 18 September 2025 00:39:37 +0000 (0:00:00.133) 0:00:23.156 **** 2025-09-18 00:39:42.951532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:42.951545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:42.951556 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:42.951567 | orchestrator | 2025-09-18 00:39:42.951578 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-18 00:39:42.951589 | orchestrator | Thursday 18 September 2025 00:39:37 +0000 (0:00:00.137) 0:00:23.294 **** 2025-09-18 00:39:42.951600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:42.951612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:42.951623 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:42.951634 | orchestrator | 2025-09-18 00:39:42.951645 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-18 00:39:42.951656 | orchestrator | Thursday 18 September 2025 00:39:37 +0000 (0:00:00.136) 0:00:23.430 **** 2025-09-18 00:39:42.951667 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:42.951679 | orchestrator | 2025-09-18 00:39:42.951690 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-18 00:39:42.951701 | orchestrator | Thursday 18 September 2025 00:39:37 +0000 (0:00:00.556) 0:00:23.987 **** 2025-09-18 00:39:42.951712 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:42.951723 | orchestrator | 2025-09-18 00:39:42.951733 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-18 00:39:42.951744 | orchestrator | Thursday 18 September 2025 00:39:38 +0000 (0:00:00.534) 0:00:24.521 **** 2025-09-18 00:39:42.951755 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:39:42.951766 | orchestrator | 2025-09-18 00:39:42.951777 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-18 00:39:42.951788 | orchestrator | Thursday 18 September 2025 00:39:38 +0000 (0:00:00.174) 0:00:24.696 **** 2025-09-18 00:39:42.951799 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'vg_name': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'}) 2025-09-18 00:39:42.951811 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'vg_name': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'}) 2025-09-18 00:39:42.951822 | orchestrator | 2025-09-18 00:39:42.951833 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-18 00:39:42.951844 | orchestrator | Thursday 18 September 2025 00:39:38 +0000 (0:00:00.201) 0:00:24.897 **** 2025-09-18 00:39:42.951855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:42.951895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:42.951909 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:42.951922 | orchestrator | 2025-09-18 00:39:42.951935 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-18 00:39:42.951947 | orchestrator | Thursday 18 September 2025 00:39:39 +0000 (0:00:00.450) 0:00:25.347 **** 2025-09-18 00:39:42.951959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:42.951972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:42.951984 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:42.951996 | orchestrator | 2025-09-18 00:39:42.952010 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-18 00:39:42.952022 | orchestrator | Thursday 18 September 2025 00:39:39 +0000 (0:00:00.196) 0:00:25.544 **** 2025-09-18 00:39:42.952035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'})  2025-09-18 00:39:42.952048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'})  2025-09-18 00:39:42.952060 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:39:42.952072 | orchestrator | 2025-09-18 00:39:42.952085 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-18 00:39:42.952097 | orchestrator | Thursday 18 September 2025 00:39:39 +0000 (0:00:00.181) 0:00:25.725 **** 2025-09-18 00:39:42.952110 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 00:39:42.952123 | orchestrator |  "lvm_report": { 2025-09-18 00:39:42.952137 | orchestrator |  "lv": [ 2025-09-18 00:39:42.952149 | orchestrator |  { 2025-09-18 00:39:42.952179 | orchestrator |  "lv_name": "osd-block-0cde6920-619d-54be-8750-7c50463ca655", 2025-09-18 00:39:42.952194 | orchestrator |  "vg_name": "ceph-0cde6920-619d-54be-8750-7c50463ca655" 2025-09-18 00:39:42.952207 | orchestrator |  }, 2025-09-18 00:39:42.952219 | orchestrator |  { 2025-09-18 00:39:42.952229 | orchestrator |  "lv_name": "osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d", 2025-09-18 00:39:42.952240 | orchestrator |  "vg_name": 
"ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d" 2025-09-18 00:39:42.952251 | orchestrator |  } 2025-09-18 00:39:42.952262 | orchestrator |  ], 2025-09-18 00:39:42.952272 | orchestrator |  "pv": [ 2025-09-18 00:39:42.952283 | orchestrator |  { 2025-09-18 00:39:42.952294 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-18 00:39:42.952304 | orchestrator |  "vg_name": "ceph-0cde6920-619d-54be-8750-7c50463ca655" 2025-09-18 00:39:42.952315 | orchestrator |  }, 2025-09-18 00:39:42.952326 | orchestrator |  { 2025-09-18 00:39:42.952336 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-18 00:39:42.952367 | orchestrator |  "vg_name": "ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d" 2025-09-18 00:39:42.952378 | orchestrator |  } 2025-09-18 00:39:42.952389 | orchestrator |  ] 2025-09-18 00:39:42.952400 | orchestrator |  } 2025-09-18 00:39:42.952411 | orchestrator | } 2025-09-18 00:39:42.952422 | orchestrator | 2025-09-18 00:39:42.952432 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-18 00:39:42.952443 | orchestrator | 2025-09-18 00:39:42.952454 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 00:39:42.952465 | orchestrator | Thursday 18 September 2025 00:39:39 +0000 (0:00:00.287) 0:00:26.013 **** 2025-09-18 00:39:42.952476 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-18 00:39:42.952496 | orchestrator | 2025-09-18 00:39:42.952507 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 00:39:42.952518 | orchestrator | Thursday 18 September 2025 00:39:40 +0000 (0:00:00.270) 0:00:26.283 **** 2025-09-18 00:39:42.952529 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:39:42.952540 | orchestrator | 2025-09-18 00:39:42.952551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952561 | orchestrator | Thursday 18 September 2025 00:39:40 +0000 (0:00:00.243) 0:00:26.527 **** 2025-09-18 00:39:42.952589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-18 00:39:42.952601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-18 00:39:42.952612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-18 00:39:42.952622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-18 00:39:42.952633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-18 00:39:42.952644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-18 00:39:42.952654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-18 00:39:42.952671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-18 00:39:42.952681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-18 00:39:42.952692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-18 00:39:42.952703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-18 00:39:42.952714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-18 00:39:42.952725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-18 00:39:42.952735 | orchestrator | 2025-09-18 00:39:42.952746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952757 | orchestrator | Thursday 18 September 2025 00:39:40 +0000 (0:00:00.438) 0:00:26.966 **** 2025-09-18 00:39:42.952768 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.952779 | orchestrator | 2025-09-18 00:39:42.952790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952800 | orchestrator | Thursday 18 September 2025 00:39:41 +0000 (0:00:00.201) 0:00:27.168 **** 2025-09-18 00:39:42.952811 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.952822 | orchestrator | 2025-09-18 00:39:42.952833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952843 | orchestrator | Thursday 18 September 2025 00:39:41 +0000 (0:00:00.203) 0:00:27.371 **** 2025-09-18 00:39:42.952854 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.952865 | orchestrator | 2025-09-18 00:39:42.952876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952886 | orchestrator | Thursday 18 September 2025 00:39:42 +0000 (0:00:00.752) 0:00:28.124 **** 2025-09-18 00:39:42.952897 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.952908 | orchestrator | 2025-09-18 00:39:42.952919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952929 | orchestrator | Thursday 18 September 2025 00:39:42 +0000 (0:00:00.219) 0:00:28.343 **** 2025-09-18 00:39:42.952940 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.952951 | orchestrator | 2025-09-18 00:39:42.952961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.952972 | orchestrator | Thursday 18 September 2025 00:39:42 +0000 (0:00:00.218) 0:00:28.562 **** 2025-09-18 00:39:42.952983 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.952994 | orchestrator | 2025-09-18 00:39:42.953012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:42.953023 | orchestrator | Thursday 18 September 2025 00:39:42 +0000 (0:00:00.210) 0:00:28.772 **** 2025-09-18 00:39:42.953034 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:42.953045 | orchestrator | 2025-09-18 00:39:42.953063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:53.449817 | orchestrator | Thursday 18 September 2025 00:39:42 +0000 (0:00:00.214) 0:00:28.987 **** 2025-09-18 00:39:53.449941 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.449960 | orchestrator | 2025-09-18 00:39:53.449973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:53.449985 | orchestrator | Thursday 18 September 2025 00:39:43 +0000 (0:00:00.220) 0:00:29.207 **** 2025-09-18 00:39:53.449996 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e) 2025-09-18 00:39:53.450009 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e) 2025-09-18 
00:39:53.450068 | orchestrator | 2025-09-18 00:39:53.450823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:53.450910 | orchestrator | Thursday 18 September 2025 00:39:43 +0000 (0:00:00.435) 0:00:29.643 **** 2025-09-18 00:39:53.450924 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8) 2025-09-18 00:39:53.450936 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8) 2025-09-18 00:39:53.450947 | orchestrator | 2025-09-18 00:39:53.450958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:53.450969 | orchestrator | Thursday 18 September 2025 00:39:44 +0000 (0:00:00.485) 0:00:30.128 **** 2025-09-18 00:39:53.450980 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31) 2025-09-18 00:39:53.450991 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31) 2025-09-18 00:39:53.451002 | orchestrator | 2025-09-18 00:39:53.451013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:53.451024 | orchestrator | Thursday 18 September 2025 00:39:44 +0000 (0:00:00.445) 0:00:30.574 **** 2025-09-18 00:39:53.451035 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d) 2025-09-18 00:39:53.451046 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d) 2025-09-18 00:39:53.451057 | orchestrator | 2025-09-18 00:39:53.451067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:39:53.451078 | orchestrator | Thursday 18 September 2025 00:39:44 +0000 (0:00:00.444) 0:00:31.018 **** 2025-09-18 00:39:53.451089 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 00:39:53.451100 | orchestrator | 2025-09-18 00:39:53.451111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451122 | orchestrator | Thursday 18 September 2025 00:39:45 +0000 (0:00:00.347) 0:00:31.366 **** 2025-09-18 00:39:53.451133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-18 00:39:53.451162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-18 00:39:53.451173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-18 00:39:53.451184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-18 00:39:53.451195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-18 00:39:53.451206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-18 00:39:53.451216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-18 00:39:53.451248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-18 00:39:53.451259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-18 00:39:53.451270 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-18 00:39:53.451280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-18 00:39:53.451291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-18 00:39:53.451301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-18 00:39:53.451312 | orchestrator | 2025-09-18 00:39:53.451322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451333 | orchestrator | Thursday 18 September 2025 00:39:45 +0000 (0:00:00.679) 0:00:32.045 **** 2025-09-18 00:39:53.451344 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451381 | orchestrator | 2025-09-18 00:39:53.451393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451404 | orchestrator | Thursday 18 September 2025 00:39:46 +0000 (0:00:00.236) 0:00:32.282 **** 2025-09-18 00:39:53.451415 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451426 | orchestrator | 2025-09-18 00:39:53.451437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451448 | orchestrator | Thursday 18 September 2025 00:39:46 +0000 (0:00:00.204) 0:00:32.486 **** 2025-09-18 00:39:53.451459 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451469 | orchestrator | 2025-09-18 00:39:53.451480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451491 | orchestrator | Thursday 18 September 2025 00:39:46 +0000 (0:00:00.212) 0:00:32.699 **** 2025-09-18 00:39:53.451502 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451512 | orchestrator | 2025-09-18 00:39:53.451624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451639 | orchestrator | Thursday 18 September 2025 00:39:46 +0000 (0:00:00.211) 0:00:32.911 **** 2025-09-18 00:39:53.451650 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451660 | orchestrator | 2025-09-18 00:39:53.451671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451682 | orchestrator | Thursday 18 September 2025 00:39:47 +0000 (0:00:00.220) 0:00:33.131 **** 2025-09-18 00:39:53.451693 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451772 | orchestrator | 2025-09-18 00:39:53.451784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451795 | orchestrator | Thursday 18 September 2025 00:39:47 +0000 (0:00:00.223) 0:00:33.354 **** 2025-09-18 00:39:53.451806 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451817 | orchestrator | 2025-09-18 00:39:53.451828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451838 | orchestrator | Thursday 18 September 2025 00:39:47 +0000 (0:00:00.182) 0:00:33.537 **** 2025-09-18 00:39:53.451849 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451860 | orchestrator | 2025-09-18 00:39:53.451871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451881 | orchestrator 
| Thursday 18 September 2025 00:39:47 +0000 (0:00:00.199) 0:00:33.737 **** 2025-09-18 00:39:53.451892 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-18 00:39:53.451903 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-18 00:39:53.451914 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-18 00:39:53.451925 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-18 00:39:53.451935 | orchestrator | 2025-09-18 00:39:53.451946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.451957 | orchestrator | Thursday 18 September 2025 00:39:48 +0000 (0:00:00.855) 0:00:34.593 **** 2025-09-18 00:39:53.451981 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.451991 | orchestrator | 2025-09-18 00:39:53.452002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.452013 | orchestrator | Thursday 18 September 2025 00:39:48 +0000 (0:00:00.212) 0:00:34.805 **** 2025-09-18 00:39:53.452024 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.452034 | orchestrator | 2025-09-18 00:39:53.452045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.452056 | orchestrator | Thursday 18 September 2025 00:39:48 +0000 (0:00:00.201) 0:00:35.007 **** 2025-09-18 00:39:53.452067 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.452077 | orchestrator | 2025-09-18 00:39:53.452088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:39:53.452098 | orchestrator | Thursday 18 September 2025 00:39:49 +0000 (0:00:00.701) 0:00:35.708 **** 2025-09-18 00:39:53.452109 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.452120 | orchestrator | 2025-09-18 00:39:53.452131 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-18 00:39:53.452141 | orchestrator | Thursday 18 September 2025 00:39:49 +0000 (0:00:00.232) 0:00:35.941 **** 2025-09-18 00:39:53.452153 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.452163 | orchestrator | 2025-09-18 00:39:53.452174 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-18 00:39:53.452185 | orchestrator | Thursday 18 September 2025 00:39:50 +0000 (0:00:00.148) 0:00:36.089 **** 2025-09-18 00:39:53.452196 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b959ef4-2353-55d9-9e37-ea43ed82416b'}}) 2025-09-18 00:39:53.452207 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '652709a4-002d-5e7f-9b0a-9f9e264992f4'}}) 2025-09-18 00:39:53.452218 | orchestrator | 2025-09-18 00:39:53.452228 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-18 00:39:53.452239 | orchestrator | Thursday 18 September 2025 00:39:50 +0000 (0:00:00.209) 0:00:36.298 **** 2025-09-18 00:39:53.452251 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'}) 2025-09-18 00:39:53.452263 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'}) 2025-09-18 00:39:53.452274 | orchestrator | 2025-09-18 00:39:53.452284 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-18 00:39:53.452295 | orchestrator | Thursday 18 September 2025 00:39:52 +0000 (0:00:01.811) 0:00:38.110 **** 2025-09-18 00:39:53.452306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:53.452318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:53.452328 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:53.452339 | orchestrator | 2025-09-18 00:39:53.452350 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-18 00:39:53.452384 | orchestrator | Thursday 18 September 2025 00:39:52 +0000 (0:00:00.139) 0:00:38.249 **** 2025-09-18 00:39:53.452395 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'}) 2025-09-18 00:39:53.452406 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'}) 2025-09-18 00:39:53.452416 | orchestrator | 2025-09-18 00:39:53.452437 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-18 00:39:58.535287 | orchestrator | Thursday 18 September 2025 00:39:53 +0000 (0:00:01.235) 0:00:39.485 **** 2025-09-18 00:39:58.535423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.535442 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.535454 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535466 | orchestrator | 2025-09-18 00:39:58.535478 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-18 00:39:58.535489 | orchestrator | Thursday 18 September 2025 00:39:53 +0000 (0:00:00.148) 0:00:39.633 **** 2025-09-18 00:39:58.535500 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535511 | orchestrator | 2025-09-18 00:39:58.535521 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-18 00:39:58.535532 | orchestrator | Thursday 18 September 2025 00:39:53 +0000 (0:00:00.126) 0:00:39.760 **** 2025-09-18 00:39:58.535544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.535569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.535581 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535592 | orchestrator | 2025-09-18 00:39:58.535603 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-18 00:39:58.535613 | orchestrator | Thursday 18 September 2025 00:39:53 +0000 (0:00:00.146) 0:00:39.906 **** 2025-09-18 00:39:58.535624 | orchestrator | skipping: [testbed-node-4] 
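
Aside for readers following the log: the "Create block VGs" and "Create block LVs" steps recorded above (both changed on testbed-node-4) can be reproduced in spirit with the community.general LVM modules. The sketch below is an assumption about how such tasks might look, not the playbook the testbed actually runs; the VG/LV names and the /dev/sdb mapping are taken from the log, while the module choice and the 100%FREE sizing are illustrative.

# Hedged sketch (not the real OSISM tasks): one block VG and one block LV per OSD device,
# mirroring the ceph-<uuid> / osd-block-<uuid> names shown in this log.
- name: Create block VG for an OSD device
  community.general.lvg:
    vg: ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b    # data_vg from the log (backed by /dev/sdb)
    pvs: /dev/sdb

- name: Create block LV inside that VG
  community.general.lvol:
    vg: ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b
    lv: osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b
    size: 100%FREE                                   # assumed: dedicate the whole device to the OSD
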
2025-09-18 00:39:58.535635 | orchestrator | 2025-09-18 00:39:58.535646 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-18 00:39:58.535657 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.146) 0:00:40.052 **** 2025-09-18 00:39:58.535668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.535679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.535689 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535700 | orchestrator | 2025-09-18 00:39:58.535711 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-18 00:39:58.535722 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.146) 0:00:40.199 **** 2025-09-18 00:39:58.535737 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535748 | orchestrator | 2025-09-18 00:39:58.535759 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-18 00:39:58.535770 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.243) 0:00:40.442 **** 2025-09-18 00:39:58.535781 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.535792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.535802 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535813 | orchestrator | 2025-09-18 00:39:58.535824 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-18 00:39:58.535835 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.122) 0:00:40.564 **** 2025-09-18 00:39:58.535848 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:39:58.535861 | orchestrator | 2025-09-18 00:39:58.535874 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-18 00:39:58.535886 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.117) 0:00:40.682 **** 2025-09-18 00:39:58.535906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.535920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.535933 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.535945 | orchestrator | 2025-09-18 00:39:58.535958 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-18 00:39:58.535971 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.158) 0:00:40.841 **** 2025-09-18 00:39:58.535983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.535996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.536008 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536021 | orchestrator | 2025-09-18 00:39:58.536033 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-18 00:39:58.536046 | orchestrator | Thursday 18 September 2025 00:39:54 +0000 (0:00:00.155) 0:00:40.997 **** 2025-09-18 00:39:58.536074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:39:58.536088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:39:58.536100 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536113 | orchestrator | 2025-09-18 00:39:58.536126 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-18 00:39:58.536138 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.127) 0:00:41.124 **** 2025-09-18 00:39:58.536149 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536159 | orchestrator | 2025-09-18 00:39:58.536170 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-18 00:39:58.536181 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.122) 0:00:41.246 **** 2025-09-18 00:39:58.536192 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536202 | orchestrator | 2025-09-18 00:39:58.536213 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-18 00:39:58.536224 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.124) 0:00:41.371 **** 2025-09-18 00:39:58.536235 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536245 | orchestrator | 2025-09-18 00:39:58.536256 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-18 00:39:58.536267 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.119) 0:00:41.490 **** 2025-09-18 00:39:58.536278 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 00:39:58.536289 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-18 00:39:58.536300 | orchestrator | } 2025-09-18 00:39:58.536311 | orchestrator | 2025-09-18 00:39:58.536321 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-18 00:39:58.536332 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.131) 0:00:41.621 **** 2025-09-18 00:39:58.536343 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 00:39:58.536354 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-18 00:39:58.536389 | orchestrator | } 2025-09-18 00:39:58.536409 | orchestrator | 2025-09-18 00:39:58.536427 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-18 00:39:58.536446 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.133) 0:00:41.754 **** 2025-09-18 00:39:58.536466 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 00:39:58.536486 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-18 00:39:58.536506 | orchestrator | } 2025-09-18 00:39:58.536517 | orchestrator | 2025-09-18 00:39:58.536528 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-18 00:39:58.536539 | orchestrator | Thursday 18 September 2025 00:39:55 +0000 (0:00:00.147) 0:00:41.902 **** 2025-09-18 00:39:58.536550 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:39:58.536560 | orchestrator | 2025-09-18 00:39:58.536571 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-18 00:39:58.536582 | orchestrator | Thursday 18 September 2025 00:39:56 +0000 (0:00:00.629) 0:00:42.531 **** 2025-09-18 00:39:58.536598 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:39:58.536609 | orchestrator | 2025-09-18 00:39:58.536620 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-18 00:39:58.536631 | orchestrator | Thursday 18 September 2025 00:39:56 +0000 (0:00:00.475) 0:00:43.006 **** 2025-09-18 00:39:58.536642 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:39:58.536652 | orchestrator | 2025-09-18 00:39:58.536663 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-18 00:39:58.536674 | orchestrator | Thursday 18 September 2025 00:39:57 +0000 (0:00:00.506) 0:00:43.513 **** 2025-09-18 00:39:58.536684 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:39:58.536695 | orchestrator | 2025-09-18 00:39:58.536706 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-18 00:39:58.536716 | orchestrator | Thursday 18 September 2025 00:39:57 +0000 (0:00:00.152) 0:00:43.665 **** 2025-09-18 00:39:58.536727 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536738 | orchestrator | 2025-09-18 00:39:58.536748 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-18 00:39:58.536759 | orchestrator | Thursday 18 September 2025 00:39:57 +0000 (0:00:00.123) 0:00:43.789 **** 2025-09-18 00:39:58.536770 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536781 | orchestrator | 2025-09-18 00:39:58.536791 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-18 00:39:58.536802 | orchestrator | Thursday 18 September 2025 00:39:57 +0000 (0:00:00.099) 0:00:43.889 **** 2025-09-18 00:39:58.536812 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 00:39:58.536823 | orchestrator |  "vgs_report": { 2025-09-18 00:39:58.536835 | orchestrator |  "vg": [] 2025-09-18 00:39:58.536846 | orchestrator |  } 2025-09-18 00:39:58.536857 | orchestrator | } 2025-09-18 00:39:58.536868 | orchestrator | 2025-09-18 00:39:58.536879 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-18 00:39:58.536889 | orchestrator | Thursday 18 September 2025 00:39:57 +0000 (0:00:00.154) 0:00:44.044 **** 2025-09-18 00:39:58.536900 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536911 | orchestrator | 2025-09-18 00:39:58.536921 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-18 00:39:58.536932 | orchestrator | Thursday 18 September 2025 00:39:58 +0000 (0:00:00.139) 0:00:44.183 **** 2025-09-18 00:39:58.536943 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536954 | orchestrator | 2025-09-18 00:39:58.536964 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-18 00:39:58.536975 | orchestrator | Thursday 18 September 2025 00:39:58 +0000 
(0:00:00.127) 0:00:44.311 **** 2025-09-18 00:39:58.536986 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.536997 | orchestrator | 2025-09-18 00:39:58.537008 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-18 00:39:58.537019 | orchestrator | Thursday 18 September 2025 00:39:58 +0000 (0:00:00.126) 0:00:44.437 **** 2025-09-18 00:39:58.537029 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:39:58.537040 | orchestrator | 2025-09-18 00:39:58.537051 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-18 00:39:58.537070 | orchestrator | Thursday 18 September 2025 00:39:58 +0000 (0:00:00.139) 0:00:44.577 **** 2025-09-18 00:40:03.462649 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.462773 | orchestrator | 2025-09-18 00:40:03.462811 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-18 00:40:03.462825 | orchestrator | Thursday 18 September 2025 00:39:58 +0000 (0:00:00.135) 0:00:44.712 **** 2025-09-18 00:40:03.462838 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.462859 | orchestrator | 2025-09-18 00:40:03.462871 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-18 00:40:03.462883 | orchestrator | Thursday 18 September 2025 00:39:58 +0000 (0:00:00.293) 0:00:45.006 **** 2025-09-18 00:40:03.462894 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.462906 | orchestrator | 2025-09-18 00:40:03.462917 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-18 00:40:03.462928 | orchestrator | Thursday 18 September 2025 00:39:59 +0000 (0:00:00.133) 0:00:45.139 **** 2025-09-18 00:40:03.462939 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.462950 | orchestrator | 2025-09-18 00:40:03.462961 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-18 00:40:03.462972 | orchestrator | Thursday 18 September 2025 00:39:59 +0000 (0:00:00.131) 0:00:45.270 **** 2025-09-18 00:40:03.462983 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.462994 | orchestrator | 2025-09-18 00:40:03.463005 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-18 00:40:03.463016 | orchestrator | Thursday 18 September 2025 00:39:59 +0000 (0:00:00.138) 0:00:45.409 **** 2025-09-18 00:40:03.463027 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463046 | orchestrator | 2025-09-18 00:40:03.463064 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-18 00:40:03.463083 | orchestrator | Thursday 18 September 2025 00:39:59 +0000 (0:00:00.135) 0:00:45.544 **** 2025-09-18 00:40:03.463099 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463117 | orchestrator | 2025-09-18 00:40:03.463135 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-18 00:40:03.463152 | orchestrator | Thursday 18 September 2025 00:39:59 +0000 (0:00:00.136) 0:00:45.681 **** 2025-09-18 00:40:03.463169 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463186 | orchestrator | 2025-09-18 00:40:03.463207 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-18 00:40:03.463225 | orchestrator | Thursday 18 September 2025 
00:39:59 +0000 (0:00:00.163) 0:00:45.845 **** 2025-09-18 00:40:03.463244 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463264 | orchestrator | 2025-09-18 00:40:03.463284 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-18 00:40:03.463304 | orchestrator | Thursday 18 September 2025 00:39:59 +0000 (0:00:00.145) 0:00:45.991 **** 2025-09-18 00:40:03.463316 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463329 | orchestrator | 2025-09-18 00:40:03.463342 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-18 00:40:03.463354 | orchestrator | Thursday 18 September 2025 00:40:00 +0000 (0:00:00.144) 0:00:46.135 **** 2025-09-18 00:40:03.463450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463480 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463491 | orchestrator | 2025-09-18 00:40:03.463502 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-18 00:40:03.463513 | orchestrator | Thursday 18 September 2025 00:40:00 +0000 (0:00:00.166) 0:00:46.302 **** 2025-09-18 00:40:03.463524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463557 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463567 | orchestrator | 2025-09-18 00:40:03.463578 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-18 00:40:03.463589 | orchestrator | Thursday 18 September 2025 00:40:00 +0000 (0:00:00.169) 0:00:46.471 **** 2025-09-18 00:40:03.463600 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463611 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463623 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463634 | orchestrator | 2025-09-18 00:40:03.463645 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-18 00:40:03.463656 | orchestrator | Thursday 18 September 2025 00:40:00 +0000 (0:00:00.174) 0:00:46.645 **** 2025-09-18 00:40:03.463667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463678 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463689 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463699 | orchestrator | 2025-09-18 00:40:03.463711 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-18 00:40:03.463743 | orchestrator | Thursday 18 September 2025 00:40:01 +0000 (0:00:00.525) 0:00:47.171 **** 2025-09-18 00:40:03.463754 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463776 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463787 | orchestrator | 2025-09-18 00:40:03.463798 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-18 00:40:03.463809 | orchestrator | Thursday 18 September 2025 00:40:01 +0000 (0:00:00.161) 0:00:47.332 **** 2025-09-18 00:40:03.463820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463841 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463853 | orchestrator | 2025-09-18 00:40:03.463864 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-18 00:40:03.463875 | orchestrator | Thursday 18 September 2025 00:40:01 +0000 (0:00:00.164) 0:00:47.497 **** 2025-09-18 00:40:03.463886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463907 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463918 | orchestrator | 2025-09-18 00:40:03.463929 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-18 00:40:03.463940 | orchestrator | Thursday 18 September 2025 00:40:01 +0000 (0:00:00.165) 0:00:47.662 **** 2025-09-18 00:40:03.463950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.463968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.463979 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.463990 | orchestrator | 2025-09-18 00:40:03.464001 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-18 00:40:03.464050 | orchestrator | Thursday 18 September 2025 00:40:01 +0000 (0:00:00.163) 0:00:47.825 **** 2025-09-18 00:40:03.464062 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:40:03.464073 | orchestrator | 2025-09-18 00:40:03.464084 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-18 00:40:03.464094 | orchestrator | Thursday 18 September 2025 00:40:02 +0000 (0:00:00.484) 
0:00:48.310 **** 2025-09-18 00:40:03.464105 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:40:03.464116 | orchestrator | 2025-09-18 00:40:03.464126 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-18 00:40:03.464137 | orchestrator | Thursday 18 September 2025 00:40:02 +0000 (0:00:00.499) 0:00:48.809 **** 2025-09-18 00:40:03.464148 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:40:03.464158 | orchestrator | 2025-09-18 00:40:03.464169 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-18 00:40:03.464180 | orchestrator | Thursday 18 September 2025 00:40:02 +0000 (0:00:00.155) 0:00:48.965 **** 2025-09-18 00:40:03.464191 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'vg_name': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'}) 2025-09-18 00:40:03.464203 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'vg_name': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'}) 2025-09-18 00:40:03.464214 | orchestrator | 2025-09-18 00:40:03.464224 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-18 00:40:03.464235 | orchestrator | Thursday 18 September 2025 00:40:03 +0000 (0:00:00.183) 0:00:49.148 **** 2025-09-18 00:40:03.464246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.464257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.464267 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:03.464278 | orchestrator | 2025-09-18 00:40:03.464289 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-18 00:40:03.464299 | orchestrator | Thursday 18 September 2025 00:40:03 +0000 (0:00:00.160) 0:00:49.309 **** 2025-09-18 00:40:03.464310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:03.464321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:03.464338 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:09.453901 | orchestrator | 2025-09-18 00:40:09.453996 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-18 00:40:09.454065 | orchestrator | Thursday 18 September 2025 00:40:03 +0000 (0:00:00.194) 0:00:49.503 **** 2025-09-18 00:40:09.454083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'})  2025-09-18 00:40:09.454095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'})  2025-09-18 00:40:09.454106 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:09.454118 | orchestrator | 2025-09-18 00:40:09.454129 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-18 00:40:09.454140 
| orchestrator | Thursday 18 September 2025 00:40:03 +0000 (0:00:00.158) 0:00:49.661 **** 2025-09-18 00:40:09.454171 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 00:40:09.454183 | orchestrator |  "lvm_report": { 2025-09-18 00:40:09.454196 | orchestrator |  "lv": [ 2025-09-18 00:40:09.454207 | orchestrator |  { 2025-09-18 00:40:09.454219 | orchestrator |  "lv_name": "osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4", 2025-09-18 00:40:09.454230 | orchestrator |  "vg_name": "ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4" 2025-09-18 00:40:09.454241 | orchestrator |  }, 2025-09-18 00:40:09.454252 | orchestrator |  { 2025-09-18 00:40:09.454263 | orchestrator |  "lv_name": "osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b", 2025-09-18 00:40:09.454273 | orchestrator |  "vg_name": "ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b" 2025-09-18 00:40:09.454284 | orchestrator |  } 2025-09-18 00:40:09.454295 | orchestrator |  ], 2025-09-18 00:40:09.454305 | orchestrator |  "pv": [ 2025-09-18 00:40:09.454316 | orchestrator |  { 2025-09-18 00:40:09.454327 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-18 00:40:09.454337 | orchestrator |  "vg_name": "ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b" 2025-09-18 00:40:09.454348 | orchestrator |  }, 2025-09-18 00:40:09.454359 | orchestrator |  { 2025-09-18 00:40:09.454369 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-18 00:40:09.454460 | orchestrator |  "vg_name": "ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4" 2025-09-18 00:40:09.454473 | orchestrator |  } 2025-09-18 00:40:09.454486 | orchestrator |  ] 2025-09-18 00:40:09.454499 | orchestrator |  } 2025-09-18 00:40:09.454512 | orchestrator | } 2025-09-18 00:40:09.454525 | orchestrator | 2025-09-18 00:40:09.454536 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-18 00:40:09.454547 | orchestrator | 2025-09-18 00:40:09.454558 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 00:40:09.454569 | orchestrator | Thursday 18 September 2025 00:40:04 +0000 (0:00:00.575) 0:00:50.236 **** 2025-09-18 00:40:09.454580 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-18 00:40:09.454591 | orchestrator | 2025-09-18 00:40:09.454615 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 00:40:09.454626 | orchestrator | Thursday 18 September 2025 00:40:04 +0000 (0:00:00.277) 0:00:50.513 **** 2025-09-18 00:40:09.454637 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:09.454649 | orchestrator | 2025-09-18 00:40:09.454660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.454671 | orchestrator | Thursday 18 September 2025 00:40:04 +0000 (0:00:00.249) 0:00:50.763 **** 2025-09-18 00:40:09.454682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-18 00:40:09.454692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-18 00:40:09.454703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-18 00:40:09.454714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-18 00:40:09.454725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-18 00:40:09.454735 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-18 00:40:09.454746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-18 00:40:09.454757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-18 00:40:09.454768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-18 00:40:09.454779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-18 00:40:09.454789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-18 00:40:09.454809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-18 00:40:09.454820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-18 00:40:09.454831 | orchestrator | 2025-09-18 00:40:09.454842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.454852 | orchestrator | Thursday 18 September 2025 00:40:05 +0000 (0:00:00.429) 0:00:51.193 **** 2025-09-18 00:40:09.454863 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.454877 | orchestrator | 2025-09-18 00:40:09.454888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.454899 | orchestrator | Thursday 18 September 2025 00:40:05 +0000 (0:00:00.224) 0:00:51.417 **** 2025-09-18 00:40:09.454910 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.454920 | orchestrator | 2025-09-18 00:40:09.454931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.454961 | orchestrator | Thursday 18 September 2025 00:40:05 +0000 (0:00:00.205) 0:00:51.623 **** 2025-09-18 00:40:09.454972 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.454983 | orchestrator | 2025-09-18 00:40:09.454994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455005 | orchestrator | Thursday 18 September 2025 00:40:05 +0000 (0:00:00.211) 0:00:51.834 **** 2025-09-18 00:40:09.455015 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.455026 | orchestrator | 2025-09-18 00:40:09.455037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455047 | orchestrator | Thursday 18 September 2025 00:40:05 +0000 (0:00:00.199) 0:00:52.034 **** 2025-09-18 00:40:09.455058 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.455069 | orchestrator | 2025-09-18 00:40:09.455079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455090 | orchestrator | Thursday 18 September 2025 00:40:06 +0000 (0:00:00.202) 0:00:52.236 **** 2025-09-18 00:40:09.455100 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.455111 | orchestrator | 2025-09-18 00:40:09.455122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455133 | orchestrator | Thursday 18 September 2025 00:40:06 +0000 (0:00:00.458) 0:00:52.695 **** 2025-09-18 00:40:09.455143 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.455154 | orchestrator | 2025-09-18 00:40:09.455165 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-18 00:40:09.455175 | orchestrator | Thursday 18 September 2025 00:40:06 +0000 (0:00:00.203) 0:00:52.899 **** 2025-09-18 00:40:09.455186 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:09.455197 | orchestrator | 2025-09-18 00:40:09.455208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455218 | orchestrator | Thursday 18 September 2025 00:40:07 +0000 (0:00:00.197) 0:00:53.097 **** 2025-09-18 00:40:09.455229 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e) 2025-09-18 00:40:09.455240 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e) 2025-09-18 00:40:09.455251 | orchestrator | 2025-09-18 00:40:09.455262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455273 | orchestrator | Thursday 18 September 2025 00:40:07 +0000 (0:00:00.397) 0:00:53.494 **** 2025-09-18 00:40:09.455283 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739) 2025-09-18 00:40:09.455294 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739) 2025-09-18 00:40:09.455305 | orchestrator | 2025-09-18 00:40:09.455315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455329 | orchestrator | Thursday 18 September 2025 00:40:07 +0000 (0:00:00.402) 0:00:53.897 **** 2025-09-18 00:40:09.455362 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd) 2025-09-18 00:40:09.455392 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd) 2025-09-18 00:40:09.455404 | orchestrator | 2025-09-18 00:40:09.455414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455425 | orchestrator | Thursday 18 September 2025 00:40:08 +0000 (0:00:00.382) 0:00:54.280 **** 2025-09-18 00:40:09.455436 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c) 2025-09-18 00:40:09.455446 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c) 2025-09-18 00:40:09.455457 | orchestrator | 2025-09-18 00:40:09.455468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 00:40:09.455479 | orchestrator | Thursday 18 September 2025 00:40:08 +0000 (0:00:00.417) 0:00:54.697 **** 2025-09-18 00:40:09.455489 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 00:40:09.455500 | orchestrator | 2025-09-18 00:40:09.455511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:09.455521 | orchestrator | Thursday 18 September 2025 00:40:08 +0000 (0:00:00.343) 0:00:55.041 **** 2025-09-18 00:40:09.455532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-18 00:40:09.455542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-18 00:40:09.455553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-18 00:40:09.455564 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-18 00:40:09.455574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-18 00:40:09.455585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-18 00:40:09.455595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-18 00:40:09.455606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-18 00:40:09.455617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-18 00:40:09.455627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-18 00:40:09.455638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-18 00:40:09.455656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-18 00:40:18.460523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-18 00:40:18.460636 | orchestrator | 2025-09-18 00:40:18.460654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460668 | orchestrator | Thursday 18 September 2025 00:40:09 +0000 (0:00:00.446) 0:00:55.487 **** 2025-09-18 00:40:18.460679 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460691 | orchestrator | 2025-09-18 00:40:18.460702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460713 | orchestrator | Thursday 18 September 2025 00:40:09 +0000 (0:00:00.195) 0:00:55.683 **** 2025-09-18 00:40:18.460724 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460735 | orchestrator | 2025-09-18 00:40:18.460746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460757 | orchestrator | Thursday 18 September 2025 00:40:09 +0000 (0:00:00.189) 0:00:55.872 **** 2025-09-18 00:40:18.460767 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460778 | orchestrator | 2025-09-18 00:40:18.460789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460820 | orchestrator | Thursday 18 September 2025 00:40:10 +0000 (0:00:00.523) 0:00:56.395 **** 2025-09-18 00:40:18.460831 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460841 | orchestrator | 2025-09-18 00:40:18.460852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460863 | orchestrator | Thursday 18 September 2025 00:40:10 +0000 (0:00:00.176) 0:00:56.572 **** 2025-09-18 00:40:18.460873 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460884 | orchestrator | 2025-09-18 00:40:18.460895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460905 | orchestrator | Thursday 18 September 2025 00:40:10 +0000 (0:00:00.180) 0:00:56.753 **** 2025-09-18 00:40:18.460916 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460926 | orchestrator | 2025-09-18 00:40:18.460937 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-18 00:40:18.460948 | orchestrator | Thursday 18 September 2025 00:40:10 +0000 (0:00:00.189) 0:00:56.943 **** 2025-09-18 00:40:18.460958 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.460969 | orchestrator | 2025-09-18 00:40:18.460980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.460990 | orchestrator | Thursday 18 September 2025 00:40:11 +0000 (0:00:00.206) 0:00:57.149 **** 2025-09-18 00:40:18.461001 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461012 | orchestrator | 2025-09-18 00:40:18.461022 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.461033 | orchestrator | Thursday 18 September 2025 00:40:11 +0000 (0:00:00.195) 0:00:57.345 **** 2025-09-18 00:40:18.461044 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-18 00:40:18.461055 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-18 00:40:18.461067 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-18 00:40:18.461078 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-18 00:40:18.461088 | orchestrator | 2025-09-18 00:40:18.461099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.461109 | orchestrator | Thursday 18 September 2025 00:40:11 +0000 (0:00:00.671) 0:00:58.017 **** 2025-09-18 00:40:18.461120 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461130 | orchestrator | 2025-09-18 00:40:18.461141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.461152 | orchestrator | Thursday 18 September 2025 00:40:12 +0000 (0:00:00.211) 0:00:58.228 **** 2025-09-18 00:40:18.461162 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461173 | orchestrator | 2025-09-18 00:40:18.461184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.461195 | orchestrator | Thursday 18 September 2025 00:40:12 +0000 (0:00:00.205) 0:00:58.433 **** 2025-09-18 00:40:18.461206 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461216 | orchestrator | 2025-09-18 00:40:18.461227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 00:40:18.461237 | orchestrator | Thursday 18 September 2025 00:40:12 +0000 (0:00:00.199) 0:00:58.632 **** 2025-09-18 00:40:18.461248 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461259 | orchestrator | 2025-09-18 00:40:18.461269 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-18 00:40:18.461279 | orchestrator | Thursday 18 September 2025 00:40:12 +0000 (0:00:00.198) 0:00:58.830 **** 2025-09-18 00:40:18.461290 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461301 | orchestrator | 2025-09-18 00:40:18.461311 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-18 00:40:18.461322 | orchestrator | Thursday 18 September 2025 00:40:13 +0000 (0:00:00.262) 0:00:59.093 **** 2025-09-18 00:40:18.461332 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07829316-95ed-5d0c-8777-c74850e385f5'}}) 2025-09-18 00:40:18.461344 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'48f1b2b0-1ebe-571e-b515-4e988bd235b0'}}) 2025-09-18 00:40:18.461363 | orchestrator | 2025-09-18 00:40:18.461374 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-18 00:40:18.461406 | orchestrator | Thursday 18 September 2025 00:40:13 +0000 (0:00:00.200) 0:00:59.293 **** 2025-09-18 00:40:18.461419 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'}) 2025-09-18 00:40:18.461431 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'}) 2025-09-18 00:40:18.461442 | orchestrator | 2025-09-18 00:40:18.461453 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-18 00:40:18.461481 | orchestrator | Thursday 18 September 2025 00:40:15 +0000 (0:00:01.988) 0:01:01.282 **** 2025-09-18 00:40:18.461493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:18.461505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:18.461516 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461527 | orchestrator | 2025-09-18 00:40:18.461538 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-18 00:40:18.461548 | orchestrator | Thursday 18 September 2025 00:40:15 +0000 (0:00:00.173) 0:01:01.456 **** 2025-09-18 00:40:18.461559 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'}) 2025-09-18 00:40:18.461587 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'}) 2025-09-18 00:40:18.461599 | orchestrator | 2025-09-18 00:40:18.461610 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-18 00:40:18.461621 | orchestrator | Thursday 18 September 2025 00:40:16 +0000 (0:00:01.365) 0:01:02.821 **** 2025-09-18 00:40:18.461631 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:18.461642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:18.461653 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461663 | orchestrator | 2025-09-18 00:40:18.461674 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-18 00:40:18.461685 | orchestrator | Thursday 18 September 2025 00:40:16 +0000 (0:00:00.182) 0:01:03.004 **** 2025-09-18 00:40:18.461695 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461706 | orchestrator | 2025-09-18 00:40:18.461716 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-18 00:40:18.461727 | orchestrator | Thursday 18 September 2025 00:40:17 +0000 (0:00:00.140) 0:01:03.145 **** 2025-09-18 
00:40:18.461738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:18.461754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:18.461765 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461776 | orchestrator | 2025-09-18 00:40:18.461786 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-18 00:40:18.461797 | orchestrator | Thursday 18 September 2025 00:40:17 +0000 (0:00:00.170) 0:01:03.315 **** 2025-09-18 00:40:18.461808 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461829 | orchestrator | 2025-09-18 00:40:18.461840 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-18 00:40:18.461851 | orchestrator | Thursday 18 September 2025 00:40:17 +0000 (0:00:00.153) 0:01:03.469 **** 2025-09-18 00:40:18.461861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:18.461872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:18.461883 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461894 | orchestrator | 2025-09-18 00:40:18.461904 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-18 00:40:18.461915 | orchestrator | Thursday 18 September 2025 00:40:17 +0000 (0:00:00.161) 0:01:03.630 **** 2025-09-18 00:40:18.461925 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.461936 | orchestrator | 2025-09-18 00:40:18.461947 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-18 00:40:18.461957 | orchestrator | Thursday 18 September 2025 00:40:17 +0000 (0:00:00.145) 0:01:03.776 **** 2025-09-18 00:40:18.461968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:18.461979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:18.461990 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:18.462000 | orchestrator | 2025-09-18 00:40:18.462011 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-18 00:40:18.462075 | orchestrator | Thursday 18 September 2025 00:40:17 +0000 (0:00:00.155) 0:01:03.931 **** 2025-09-18 00:40:18.462087 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:18.462098 | orchestrator | 2025-09-18 00:40:18.462109 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-18 00:40:18.462129 | orchestrator | Thursday 18 September 2025 00:40:18 +0000 (0:00:00.407) 0:01:04.339 **** 2025-09-18 00:40:18.462148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:24.736129 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:24.736259 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.736278 | orchestrator | 2025-09-18 00:40:24.736294 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-18 00:40:24.736317 | orchestrator | Thursday 18 September 2025 00:40:18 +0000 (0:00:00.162) 0:01:04.501 **** 2025-09-18 00:40:24.736330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:24.736343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:24.736354 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.736366 | orchestrator | 2025-09-18 00:40:24.736377 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-18 00:40:24.736413 | orchestrator | Thursday 18 September 2025 00:40:18 +0000 (0:00:00.158) 0:01:04.660 **** 2025-09-18 00:40:24.736425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:24.736437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:24.736448 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.736484 | orchestrator | 2025-09-18 00:40:24.736496 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-18 00:40:24.736508 | orchestrator | Thursday 18 September 2025 00:40:18 +0000 (0:00:00.152) 0:01:04.812 **** 2025-09-18 00:40:24.736519 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.736530 | orchestrator | 2025-09-18 00:40:24.736541 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-18 00:40:24.736552 | orchestrator | Thursday 18 September 2025 00:40:18 +0000 (0:00:00.144) 0:01:04.956 **** 2025-09-18 00:40:24.736562 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.736573 | orchestrator | 2025-09-18 00:40:24.736584 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-18 00:40:24.736595 | orchestrator | Thursday 18 September 2025 00:40:19 +0000 (0:00:00.149) 0:01:05.106 **** 2025-09-18 00:40:24.736606 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.736616 | orchestrator | 2025-09-18 00:40:24.736628 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-18 00:40:24.736653 | orchestrator | Thursday 18 September 2025 00:40:19 +0000 (0:00:00.144) 0:01:05.251 **** 2025-09-18 00:40:24.736664 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 00:40:24.736676 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-18 00:40:24.736687 | orchestrator | } 2025-09-18 00:40:24.736698 | orchestrator | 2025-09-18 00:40:24.736709 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-18 00:40:24.736720 | orchestrator | Thursday 18 September 2025 00:40:19 +0000 (0:00:00.151) 
0:01:05.402 **** 2025-09-18 00:40:24.736731 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 00:40:24.736742 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-18 00:40:24.736753 | orchestrator | } 2025-09-18 00:40:24.736764 | orchestrator | 2025-09-18 00:40:24.736775 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-18 00:40:24.736786 | orchestrator | Thursday 18 September 2025 00:40:19 +0000 (0:00:00.145) 0:01:05.548 **** 2025-09-18 00:40:24.736797 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 00:40:24.736808 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-18 00:40:24.736819 | orchestrator | } 2025-09-18 00:40:24.736830 | orchestrator | 2025-09-18 00:40:24.736841 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-18 00:40:24.736852 | orchestrator | Thursday 18 September 2025 00:40:19 +0000 (0:00:00.156) 0:01:05.704 **** 2025-09-18 00:40:24.736863 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:24.736874 | orchestrator | 2025-09-18 00:40:24.736885 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-18 00:40:24.736896 | orchestrator | Thursday 18 September 2025 00:40:20 +0000 (0:00:00.548) 0:01:06.252 **** 2025-09-18 00:40:24.736907 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:24.736917 | orchestrator | 2025-09-18 00:40:24.736928 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-18 00:40:24.736939 | orchestrator | Thursday 18 September 2025 00:40:20 +0000 (0:00:00.521) 0:01:06.773 **** 2025-09-18 00:40:24.736950 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:24.736961 | orchestrator | 2025-09-18 00:40:24.736971 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-18 00:40:24.736982 | orchestrator | Thursday 18 September 2025 00:40:21 +0000 (0:00:00.727) 0:01:07.501 **** 2025-09-18 00:40:24.736993 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:24.737003 | orchestrator | 2025-09-18 00:40:24.737014 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-18 00:40:24.737025 | orchestrator | Thursday 18 September 2025 00:40:21 +0000 (0:00:00.148) 0:01:07.649 **** 2025-09-18 00:40:24.737036 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737047 | orchestrator | 2025-09-18 00:40:24.737058 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-18 00:40:24.737069 | orchestrator | Thursday 18 September 2025 00:40:21 +0000 (0:00:00.130) 0:01:07.779 **** 2025-09-18 00:40:24.737087 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737098 | orchestrator | 2025-09-18 00:40:24.737109 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-18 00:40:24.737120 | orchestrator | Thursday 18 September 2025 00:40:21 +0000 (0:00:00.114) 0:01:07.894 **** 2025-09-18 00:40:24.737131 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 00:40:24.737142 | orchestrator |  "vgs_report": { 2025-09-18 00:40:24.737155 | orchestrator |  "vg": [] 2025-09-18 00:40:24.737185 | orchestrator |  } 2025-09-18 00:40:24.737198 | orchestrator | } 2025-09-18 00:40:24.737209 | orchestrator | 2025-09-18 00:40:24.737221 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
2025-09-18 00:40:24.737232 | orchestrator | Thursday 18 September 2025 00:40:21 +0000 (0:00:00.152) 0:01:08.046 **** 2025-09-18 00:40:24.737243 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737254 | orchestrator | 2025-09-18 00:40:24.737266 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-18 00:40:24.737278 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.136) 0:01:08.182 **** 2025-09-18 00:40:24.737289 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737300 | orchestrator | 2025-09-18 00:40:24.737312 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-18 00:40:24.737323 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.143) 0:01:08.326 **** 2025-09-18 00:40:24.737334 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737345 | orchestrator | 2025-09-18 00:40:24.737357 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-18 00:40:24.737368 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.144) 0:01:08.471 **** 2025-09-18 00:40:24.737380 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737425 | orchestrator | 2025-09-18 00:40:24.737436 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-18 00:40:24.737447 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.141) 0:01:08.613 **** 2025-09-18 00:40:24.737458 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737468 | orchestrator | 2025-09-18 00:40:24.737479 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-18 00:40:24.737490 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.140) 0:01:08.753 **** 2025-09-18 00:40:24.737501 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737511 | orchestrator | 2025-09-18 00:40:24.737522 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-18 00:40:24.737533 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.147) 0:01:08.901 **** 2025-09-18 00:40:24.737543 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737554 | orchestrator | 2025-09-18 00:40:24.737565 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-18 00:40:24.737575 | orchestrator | Thursday 18 September 2025 00:40:22 +0000 (0:00:00.134) 0:01:09.036 **** 2025-09-18 00:40:24.737586 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737597 | orchestrator | 2025-09-18 00:40:24.737607 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-18 00:40:24.737618 | orchestrator | Thursday 18 September 2025 00:40:23 +0000 (0:00:00.127) 0:01:09.163 **** 2025-09-18 00:40:24.737629 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737639 | orchestrator | 2025-09-18 00:40:24.737650 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-18 00:40:24.737667 | orchestrator | Thursday 18 September 2025 00:40:23 +0000 (0:00:00.394) 0:01:09.557 **** 2025-09-18 00:40:24.737678 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737689 | orchestrator | 2025-09-18 00:40:24.737699 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-18 00:40:24.737710 | orchestrator | Thursday 18 September 2025 00:40:23 +0000 (0:00:00.148) 0:01:09.706 **** 2025-09-18 00:40:24.737721 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737739 | orchestrator | 2025-09-18 00:40:24.737750 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-18 00:40:24.737761 | orchestrator | Thursday 18 September 2025 00:40:23 +0000 (0:00:00.138) 0:01:09.845 **** 2025-09-18 00:40:24.737772 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737782 | orchestrator | 2025-09-18 00:40:24.737793 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-18 00:40:24.737804 | orchestrator | Thursday 18 September 2025 00:40:23 +0000 (0:00:00.153) 0:01:09.998 **** 2025-09-18 00:40:24.737815 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737826 | orchestrator | 2025-09-18 00:40:24.737836 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-18 00:40:24.737847 | orchestrator | Thursday 18 September 2025 00:40:24 +0000 (0:00:00.154) 0:01:10.153 **** 2025-09-18 00:40:24.737858 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737868 | orchestrator | 2025-09-18 00:40:24.737879 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-18 00:40:24.737890 | orchestrator | Thursday 18 September 2025 00:40:24 +0000 (0:00:00.137) 0:01:10.290 **** 2025-09-18 00:40:24.737901 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:24.737912 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:24.737922 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737933 | orchestrator | 2025-09-18 00:40:24.737944 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-18 00:40:24.737954 | orchestrator | Thursday 18 September 2025 00:40:24 +0000 (0:00:00.157) 0:01:10.448 **** 2025-09-18 00:40:24.737965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:24.737976 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:24.737987 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:24.737997 | orchestrator | 2025-09-18 00:40:24.738008 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-18 00:40:24.738057 | orchestrator | Thursday 18 September 2025 00:40:24 +0000 (0:00:00.158) 0:01:10.607 **** 2025-09-18 00:40:24.738078 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.855538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.855639 | orchestrator | skipping: [testbed-node-5] 2025-09-18 
00:40:27.855654 | orchestrator | 2025-09-18 00:40:27.855667 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-18 00:40:27.855680 | orchestrator | Thursday 18 September 2025 00:40:24 +0000 (0:00:00.170) 0:01:10.778 **** 2025-09-18 00:40:27.855691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.855703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.855714 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.855725 | orchestrator | 2025-09-18 00:40:27.855736 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-18 00:40:27.855760 | orchestrator | Thursday 18 September 2025 00:40:24 +0000 (0:00:00.157) 0:01:10.936 **** 2025-09-18 00:40:27.855771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.855806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.855818 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.855828 | orchestrator | 2025-09-18 00:40:27.855839 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-18 00:40:27.855850 | orchestrator | Thursday 18 September 2025 00:40:25 +0000 (0:00:00.166) 0:01:11.103 **** 2025-09-18 00:40:27.855861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.855871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.855882 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.855893 | orchestrator | 2025-09-18 00:40:27.855904 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-18 00:40:27.855915 | orchestrator | Thursday 18 September 2025 00:40:25 +0000 (0:00:00.166) 0:01:11.269 **** 2025-09-18 00:40:27.855926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.855937 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.855947 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.855958 | orchestrator | 2025-09-18 00:40:27.855969 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-18 00:40:27.855980 | orchestrator | Thursday 18 September 2025 00:40:25 +0000 (0:00:00.345) 0:01:11.615 **** 2025-09-18 00:40:27.855991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.856002 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.856013 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.856023 | orchestrator | 2025-09-18 00:40:27.856034 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-18 00:40:27.856044 | orchestrator | Thursday 18 September 2025 00:40:25 +0000 (0:00:00.173) 0:01:11.788 **** 2025-09-18 00:40:27.856055 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:27.856067 | orchestrator | 2025-09-18 00:40:27.856077 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-18 00:40:27.856088 | orchestrator | Thursday 18 September 2025 00:40:26 +0000 (0:00:00.548) 0:01:12.337 **** 2025-09-18 00:40:27.856099 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:27.856109 | orchestrator | 2025-09-18 00:40:27.856120 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-18 00:40:27.856131 | orchestrator | Thursday 18 September 2025 00:40:26 +0000 (0:00:00.544) 0:01:12.881 **** 2025-09-18 00:40:27.856141 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:27.856152 | orchestrator | 2025-09-18 00:40:27.856163 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-18 00:40:27.856174 | orchestrator | Thursday 18 September 2025 00:40:26 +0000 (0:00:00.156) 0:01:13.038 **** 2025-09-18 00:40:27.856184 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'vg_name': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'}) 2025-09-18 00:40:27.856196 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'vg_name': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'}) 2025-09-18 00:40:27.856207 | orchestrator | 2025-09-18 00:40:27.856218 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-18 00:40:27.856237 | orchestrator | Thursday 18 September 2025 00:40:27 +0000 (0:00:00.173) 0:01:13.212 **** 2025-09-18 00:40:27.856264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.856276 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.856287 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.856298 | orchestrator | 2025-09-18 00:40:27.856308 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-18 00:40:27.856319 | orchestrator | Thursday 18 September 2025 00:40:27 +0000 (0:00:00.164) 0:01:13.377 **** 2025-09-18 00:40:27.856330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.856341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.856352 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.856363 | orchestrator | 2025-09-18 00:40:27.856374 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-18 00:40:27.856385 | orchestrator | Thursday 18 September 2025 00:40:27 +0000 (0:00:00.163) 0:01:13.541 **** 2025-09-18 00:40:27.856426 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'})  2025-09-18 00:40:27.856455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'})  2025-09-18 00:40:27.856467 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:27.856478 | orchestrator | 2025-09-18 00:40:27.856488 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-18 00:40:27.856499 | orchestrator | Thursday 18 September 2025 00:40:27 +0000 (0:00:00.173) 0:01:13.714 **** 2025-09-18 00:40:27.856510 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 00:40:27.856521 | orchestrator |  "lvm_report": { 2025-09-18 00:40:27.856533 | orchestrator |  "lv": [ 2025-09-18 00:40:27.856544 | orchestrator |  { 2025-09-18 00:40:27.856555 | orchestrator |  "lv_name": "osd-block-07829316-95ed-5d0c-8777-c74850e385f5", 2025-09-18 00:40:27.856572 | orchestrator |  "vg_name": "ceph-07829316-95ed-5d0c-8777-c74850e385f5" 2025-09-18 00:40:27.856583 | orchestrator |  }, 2025-09-18 00:40:27.856594 | orchestrator |  { 2025-09-18 00:40:27.856605 | orchestrator |  "lv_name": "osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0", 2025-09-18 00:40:27.856616 | orchestrator |  "vg_name": "ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0" 2025-09-18 00:40:27.856627 | orchestrator |  } 2025-09-18 00:40:27.856637 | orchestrator |  ], 2025-09-18 00:40:27.856648 | orchestrator |  "pv": [ 2025-09-18 00:40:27.856659 | orchestrator |  { 2025-09-18 00:40:27.856670 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-18 00:40:27.856681 | orchestrator |  "vg_name": "ceph-07829316-95ed-5d0c-8777-c74850e385f5" 2025-09-18 00:40:27.856691 | orchestrator |  }, 2025-09-18 00:40:27.856702 | orchestrator |  { 2025-09-18 00:40:27.856713 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-18 00:40:27.856724 | orchestrator |  "vg_name": "ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0" 2025-09-18 00:40:27.856734 | orchestrator |  } 2025-09-18 00:40:27.856745 | orchestrator |  ] 2025-09-18 00:40:27.856756 | orchestrator |  } 2025-09-18 00:40:27.856767 | orchestrator | } 2025-09-18 00:40:27.856778 | orchestrator | 2025-09-18 00:40:27.856789 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:40:27.856808 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-18 00:40:27.856819 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-18 00:40:27.856830 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-18 00:40:27.856841 | orchestrator | 2025-09-18 00:40:27.856851 | orchestrator | 2025-09-18 00:40:27.856862 | orchestrator | 2025-09-18 00:40:27.856873 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:40:27.856884 | orchestrator | Thursday 18 September 2025 00:40:27 +0000 (0:00:00.153) 0:01:13.867 **** 2025-09-18 00:40:27.856895 | orchestrator | =============================================================================== 2025-09-18 00:40:27.856905 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.64s 2025-09-18 00:40:27.856916 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s 2025-09-18 00:40:27.856927 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s 2025-09-18 00:40:27.856938 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.81s 2025-09-18 00:40:27.856949 | orchestrator | Add known partitions to the list of available block devices ------------- 1.59s 2025-09-18 00:40:27.856959 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s 2025-09-18 00:40:27.856970 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2025-09-18 00:40:27.856981 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2025-09-18 00:40:27.856999 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s 2025-09-18 00:40:28.289650 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2025-09-18 00:40:28.289752 | orchestrator | Print LVM report data --------------------------------------------------- 1.02s 2025-09-18 00:40:28.289766 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-09-18 00:40:28.289778 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.83s 2025-09-18 00:40:28.289789 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2025-09-18 00:40:28.289800 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.82s 2025-09-18 00:40:28.289810 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2025-09-18 00:40:28.289821 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.78s 2025-09-18 00:40:28.289831 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-09-18 00:40:28.289842 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.72s 2025-09-18 00:40:28.289853 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s 2025-09-18 00:40:40.640674 | orchestrator | 2025-09-18 00:40:40 | INFO  | Task 1e3d8137-861e-4d21-a84e-c996a11491a0 (facts) was prepared for execution. 2025-09-18 00:40:40.640788 | orchestrator | 2025-09-18 00:40:40 | INFO  | It takes a moment until task 1e3d8137-861e-4d21-a84e-c996a11491a0 (facts) has been started and output is visible here. 
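The play that just finished validates the LVM layout for the Ceph OSDs on each storage node: it lists the existing LVs and PVs together with their VGs via LVM's JSON report format, combines both outputs, and only fails when an LV declared in `lvm_volumes` is missing. The following is a minimal sketch of that gather-and-combine step, assuming plain `lvs`/`pvs` calls; the task names mirror the log, but the `_lvs_cmd_output`, `_pvs_cmd_output`, and `_lvm_report` variables are illustrative placeholders, not the actual osism role internals.

```yaml
---
# Hedged sketch: collect LV -> VG and PV -> VG mappings the way the
# "Get list of Ceph LVs/PVs" and "Combine JSON" tasks in the log appear to,
# then print the combined structure. Variable names are placeholders.
- name: Gather LVM report data (sketch)
  hosts: testbed-node-5
  become: true
  tasks:
    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        _lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: _lvm_report
```

On this node the combined structure matches the `lvm_report` printed above: two `osd-block-*` LVs inside their `ceph-*` VGs, backed by `/dev/sdb` and `/dev/sdc`.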
2025-09-18 00:40:52.821742 | orchestrator | 2025-09-18 00:40:52.821855 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-18 00:40:52.821871 | orchestrator | 2025-09-18 00:40:52.821883 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-18 00:40:52.821894 | orchestrator | Thursday 18 September 2025 00:40:44 +0000 (0:00:00.274) 0:00:00.274 **** 2025-09-18 00:40:52.821905 | orchestrator | ok: [testbed-manager] 2025-09-18 00:40:52.821917 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:40:52.821954 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:40:52.821966 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:40:52.821976 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:40:52.821987 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:40:52.821997 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:52.822007 | orchestrator | 2025-09-18 00:40:52.822069 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-18 00:40:52.822082 | orchestrator | Thursday 18 September 2025 00:40:45 +0000 (0:00:01.091) 0:00:01.366 **** 2025-09-18 00:40:52.822109 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:40:52.822120 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:40:52.822142 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:40:52.822153 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:40:52.822163 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:40:52.822174 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:52.822185 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:40:52.822196 | orchestrator | 2025-09-18 00:40:52.822207 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-18 00:40:52.822218 | orchestrator | 2025-09-18 00:40:52.822229 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-18 00:40:52.822240 | orchestrator | Thursday 18 September 2025 00:40:46 +0000 (0:00:01.201) 0:00:02.568 **** 2025-09-18 00:40:52.822250 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:40:52.822261 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:40:52.822272 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:40:52.822282 | orchestrator | ok: [testbed-manager] 2025-09-18 00:40:52.822293 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:40:52.822304 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:40:52.822314 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:40:52.822325 | orchestrator | 2025-09-18 00:40:52.822336 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-18 00:40:52.822346 | orchestrator | 2025-09-18 00:40:52.822357 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-18 00:40:52.822368 | orchestrator | Thursday 18 September 2025 00:40:51 +0000 (0:00:05.013) 0:00:07.581 **** 2025-09-18 00:40:52.822379 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:40:52.822390 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:40:52.822400 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:40:52.822433 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:40:52.822445 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:40:52.822456 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:40:52.822466 | orchestrator | skipping: 
[testbed-node-5] 2025-09-18 00:40:52.822476 | orchestrator | 2025-09-18 00:40:52.822487 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:40:52.822499 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822511 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822522 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822532 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822543 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822553 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822564 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:40:52.822585 | orchestrator | 2025-09-18 00:40:52.822595 | orchestrator | 2025-09-18 00:40:52.822606 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:40:52.822617 | orchestrator | Thursday 18 September 2025 00:40:52 +0000 (0:00:00.543) 0:00:08.124 **** 2025-09-18 00:40:52.822628 | orchestrator | =============================================================================== 2025-09-18 00:40:52.822639 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.01s 2025-09-18 00:40:52.822650 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2025-09-18 00:40:52.822660 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-09-18 00:40:52.822671 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-09-18 00:41:05.100557 | orchestrator | 2025-09-18 00:41:05 | INFO  | Task 4c3317bf-a276-4ca8-87d3-8b3a9bf3e7f1 (frr) was prepared for execution. 2025-09-18 00:41:05.100674 | orchestrator | 2025-09-18 00:41:05 | INFO  | It takes a moment until task 4c3317bf-a276-4ca8-87d3-8b3a9bf3e7f1 (frr) has been started and output is visible here. 
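The facts run above only ensures the custom facts directory exists on every host and refreshes cached facts before the following plays. A stripped-down equivalent, assuming the default `/etc/ansible/facts.d` location rather than whatever path `osism.commons.facts` actually configures, could look like this:

```yaml
---
# Hedged sketch of the facts play: create the local facts directory and
# re-gather facts on all hosts. The directory path is the Ansible default
# and only an assumption about what osism.commons.facts uses.
- name: Apply role facts (sketch)
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

- name: Gather facts for all hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Gathers facts about hosts
      ansible.builtin.setup:
```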
2025-09-18 00:41:32.593291 | orchestrator | 2025-09-18 00:41:32.593408 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-18 00:41:32.593425 | orchestrator | 2025-09-18 00:41:32.593438 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-18 00:41:32.593450 | orchestrator | Thursday 18 September 2025 00:41:09 +0000 (0:00:00.235) 0:00:00.235 **** 2025-09-18 00:41:32.593508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 00:41:32.593522 | orchestrator | 2025-09-18 00:41:32.593534 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-18 00:41:32.593545 | orchestrator | Thursday 18 September 2025 00:41:09 +0000 (0:00:00.223) 0:00:00.459 **** 2025-09-18 00:41:32.593556 | orchestrator | changed: [testbed-manager] 2025-09-18 00:41:32.593568 | orchestrator | 2025-09-18 00:41:32.593579 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-18 00:41:32.593591 | orchestrator | Thursday 18 September 2025 00:41:10 +0000 (0:00:01.148) 0:00:01.607 **** 2025-09-18 00:41:32.593602 | orchestrator | changed: [testbed-manager] 2025-09-18 00:41:32.593613 | orchestrator | 2025-09-18 00:41:32.593641 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-18 00:41:32.593653 | orchestrator | Thursday 18 September 2025 00:41:20 +0000 (0:00:09.524) 0:00:11.131 **** 2025-09-18 00:41:32.593664 | orchestrator | ok: [testbed-manager] 2025-09-18 00:41:32.593676 | orchestrator | 2025-09-18 00:41:32.593687 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-18 00:41:32.593698 | orchestrator | Thursday 18 September 2025 00:41:21 +0000 (0:00:01.261) 0:00:12.393 **** 2025-09-18 00:41:32.593708 | orchestrator | changed: [testbed-manager] 2025-09-18 00:41:32.593719 | orchestrator | 2025-09-18 00:41:32.593730 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-18 00:41:32.593741 | orchestrator | Thursday 18 September 2025 00:41:22 +0000 (0:00:00.979) 0:00:13.372 **** 2025-09-18 00:41:32.593752 | orchestrator | ok: [testbed-manager] 2025-09-18 00:41:32.593764 | orchestrator | 2025-09-18 00:41:32.593775 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-18 00:41:32.593786 | orchestrator | Thursday 18 September 2025 00:41:23 +0000 (0:00:01.198) 0:00:14.571 **** 2025-09-18 00:41:32.593797 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:41:32.593808 | orchestrator | 2025-09-18 00:41:32.593820 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-18 00:41:32.593833 | orchestrator | Thursday 18 September 2025 00:41:24 +0000 (0:00:00.827) 0:00:15.398 **** 2025-09-18 00:41:32.593845 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:41:32.593857 | orchestrator | 2025-09-18 00:41:32.593870 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-18 00:41:32.593906 | orchestrator | Thursday 18 September 2025 00:41:24 +0000 (0:00:00.161) 0:00:15.559 **** 2025-09-18 00:41:32.593919 | orchestrator | changed: [testbed-manager] 2025-09-18 00:41:32.593932 | orchestrator 
| 2025-09-18 00:41:32.593943 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-18 00:41:32.593954 | orchestrator | Thursday 18 September 2025 00:41:25 +0000 (0:00:01.049) 0:00:16.608 **** 2025-09-18 00:41:32.593965 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-18 00:41:32.593976 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-18 00:41:32.593988 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-18 00:41:32.593999 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-18 00:41:32.594010 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-18 00:41:32.594078 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-18 00:41:32.594090 | orchestrator | 2025-09-18 00:41:32.594101 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-18 00:41:32.594112 | orchestrator | Thursday 18 September 2025 00:41:28 +0000 (0:00:03.120) 0:00:19.728 **** 2025-09-18 00:41:32.594122 | orchestrator | ok: [testbed-manager] 2025-09-18 00:41:32.594133 | orchestrator | 2025-09-18 00:41:32.594144 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-18 00:41:32.594155 | orchestrator | Thursday 18 September 2025 00:41:30 +0000 (0:00:01.319) 0:00:21.048 **** 2025-09-18 00:41:32.594166 | orchestrator | changed: [testbed-manager] 2025-09-18 00:41:32.594177 | orchestrator | 2025-09-18 00:41:32.594188 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:41:32.594199 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:41:32.594210 | orchestrator | 2025-09-18 00:41:32.594221 | orchestrator | 2025-09-18 00:41:32.594232 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:41:32.594243 | orchestrator | Thursday 18 September 2025 00:41:32 +0000 (0:00:02.309) 0:00:23.358 **** 2025-09-18 00:41:32.594253 | orchestrator | =============================================================================== 2025-09-18 00:41:32.594265 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.52s 2025-09-18 00:41:32.594275 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.12s 2025-09-18 00:41:32.594286 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 2.31s 2025-09-18 00:41:32.594297 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.32s 2025-09-18 00:41:32.594326 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.26s 2025-09-18 00:41:32.594337 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s 2025-09-18 00:41:32.594348 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.15s 2025-09-18 00:41:32.594359 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.05s 2025-09-18 
00:41:32.594369 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s 2025-09-18 00:41:32.594380 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.83s 2025-09-18 00:41:32.594391 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-09-18 00:41:32.594402 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-09-18 00:41:32.873278 | orchestrator | 2025-09-18 00:41:32.876076 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Sep 18 00:41:32 UTC 2025 2025-09-18 00:41:32.876132 | orchestrator | 2025-09-18 00:41:34.713754 | orchestrator | 2025-09-18 00:41:34 | INFO  | Collection nutshell is prepared for execution 2025-09-18 00:41:34.713859 | orchestrator | 2025-09-18 00:41:34 | INFO  | D [0] - dotfiles 2025-09-18 00:41:44.786644 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [0] - homer 2025-09-18 00:41:44.786760 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [0] - netdata 2025-09-18 00:41:44.786777 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [0] - openstackclient 2025-09-18 00:41:44.786789 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [0] - phpmyadmin 2025-09-18 00:41:44.786815 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [0] - common 2025-09-18 00:41:44.791179 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [1] -- loadbalancer 2025-09-18 00:41:44.791205 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [2] --- opensearch 2025-09-18 00:41:44.791618 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [2] --- mariadb-ng 2025-09-18 00:41:44.791854 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [3] ---- horizon 2025-09-18 00:41:44.792072 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [3] ---- keystone 2025-09-18 00:41:44.792403 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [4] ----- neutron 2025-09-18 00:41:44.792736 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [5] ------ wait-for-nova 2025-09-18 00:41:44.792889 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [5] ------ octavia 2025-09-18 00:41:44.794887 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [4] ----- barbican 2025-09-18 00:41:44.794911 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [4] ----- designate 2025-09-18 00:41:44.794922 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [4] ----- ironic 2025-09-18 00:41:44.794933 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [4] ----- placement 2025-09-18 00:41:44.795402 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [4] ----- magnum 2025-09-18 00:41:44.796353 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [1] -- openvswitch 2025-09-18 00:41:44.796627 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [2] --- ovn 2025-09-18 00:41:44.796787 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [1] -- memcached 2025-09-18 00:41:44.797319 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [1] -- redis 2025-09-18 00:41:44.797365 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [1] -- rabbitmq-ng 2025-09-18 00:41:44.798236 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [0] - kubernetes 2025-09-18 00:41:44.800373 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [1] -- kubeconfig 2025-09-18 00:41:44.800397 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [1] -- copy-kubeconfig 2025-09-18 00:41:44.800981 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [0] - ceph 2025-09-18 00:41:44.803200 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [1] -- ceph-pools 2025-09-18 
00:41:44.803294 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [2] --- copy-ceph-keys 2025-09-18 00:41:44.803585 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [3] ---- cephclient 2025-09-18 00:41:44.804025 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-18 00:41:44.804045 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [4] ----- wait-for-keystone 2025-09-18 00:41:44.804674 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-18 00:41:44.804693 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [5] ------ glance 2025-09-18 00:41:44.804705 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [5] ------ cinder 2025-09-18 00:41:44.804790 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [5] ------ nova 2025-09-18 00:41:44.804838 | orchestrator | 2025-09-18 00:41:44 | INFO  | A [4] ----- prometheus 2025-09-18 00:41:44.805198 | orchestrator | 2025-09-18 00:41:44 | INFO  | D [5] ------ grafana 2025-09-18 00:41:45.026073 | orchestrator | 2025-09-18 00:41:45 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-18 00:41:45.026165 | orchestrator | 2025-09-18 00:41:45 | INFO  | Tasks are running in the background 2025-09-18 00:41:47.770159 | orchestrator | 2025-09-18 00:41:47 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-18 00:41:49.861322 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:41:49.861546 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:41:49.862134 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:41:49.863990 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:41:49.864438 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:41:49.865026 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:41:49.865711 | orchestrator | 2025-09-18 00:41:49 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:41:49.865737 | orchestrator | 2025-09-18 00:41:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:41:52.928808 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:41:52.928997 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:41:52.929660 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:41:52.930321 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:41:52.930832 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:41:52.931538 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:41:52.932249 | orchestrator | 2025-09-18 00:41:52 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:41:52.932270 | orchestrator | 2025-09-18 00:41:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:41:55.965646 | orchestrator | 2025-09-18 00:41:55 | INFO  
| Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:41:55.972258 | orchestrator | 2025-09-18 00:41:55 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:41:55.976092 | orchestrator | 2025-09-18 00:41:55 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:41:55.976668 | orchestrator | 2025-09-18 00:41:55 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:41:55.977339 | orchestrator | 2025-09-18 00:41:55 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:41:55.977991 | orchestrator | 2025-09-18 00:41:55 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:41:55.979160 | orchestrator | 2025-09-18 00:41:55 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:41:55.979184 | orchestrator | 2025-09-18 00:41:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:41:59.142940 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:41:59.143066 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:41:59.143085 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:41:59.143098 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:41:59.143109 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:41:59.143120 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:41:59.143131 | orchestrator | 2025-09-18 00:41:59 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:41:59.143142 | orchestrator | 2025-09-18 00:41:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:02.217976 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:02.218153 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:42:02.218174 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:02.218186 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:02.218198 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:02.218209 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:02.218219 | orchestrator | 2025-09-18 00:42:02 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:02.218238 | orchestrator | 2025-09-18 00:42:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:05.301884 | orchestrator | 2025-09-18 00:42:05 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:05.301994 | orchestrator | 2025-09-18 00:42:05 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:42:05.302011 | orchestrator | 2025-09-18 00:42:05 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:05.302078 
| orchestrator | 2025-09-18 00:42:05 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:05.302090 | orchestrator | 2025-09-18 00:42:05 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:05.302102 | orchestrator | 2025-09-18 00:42:05 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:05.302114 | orchestrator | 2025-09-18 00:42:05 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:05.302126 | orchestrator | 2025-09-18 00:42:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:08.267965 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:08.268083 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state STARTED 2025-09-18 00:42:08.268098 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:08.268110 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:08.268144 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:08.268156 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:08.268166 | orchestrator | 2025-09-18 00:42:08 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:08.268177 | orchestrator | 2025-09-18 00:42:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:11.429355 | orchestrator | 2025-09-18 00:42:11.429447 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-18 00:42:11.429462 | orchestrator | 2025-09-18 00:42:11.429475 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-18 00:42:11.429486 | orchestrator | Thursday 18 September 2025 00:41:57 +0000 (0:00:01.296) 0:00:01.296 **** 2025-09-18 00:42:11.429550 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:42:11.429564 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:42:11.429575 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:42:11.429586 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:42:11.429597 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:42:11.429607 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:42:11.429618 | orchestrator | changed: [testbed-manager] 2025-09-18 00:42:11.429628 | orchestrator | 2025-09-18 00:42:11.429639 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-18 00:42:11.429651 | orchestrator | Thursday 18 September 2025 00:42:00 +0000 (0:00:03.677) 0:00:04.974 **** 2025-09-18 00:42:11.429662 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-18 00:42:11.429673 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-18 00:42:11.429683 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-18 00:42:11.429699 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-18 00:42:11.429716 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-18 00:42:11.429727 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-18 00:42:11.429738 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-18 00:42:11.429749 | orchestrator | 2025-09-18 00:42:11.429760 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-18 00:42:11.429771 | orchestrator | Thursday 18 September 2025 00:42:02 +0000 (0:00:01.669) 0:00:06.643 **** 2025-09-18 00:42:11.429786 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:01.377718', 'end': '2025-09-18 00:42:01.385935', 'delta': '0:00:00.008217', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429815 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:01.597375', 'end': '2025-09-18 00:42:01.604813', 'delta': '0:00:00.007438', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429845 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:01.341191', 'end': '2025-09-18 00:42:01.345474', 'delta': '0:00:00.004283', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429876 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:01.732924', 'end': '2025-09-18 00:42:01.741405', 'delta': '0:00:00.008481', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429889 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:01.928820', 'end': '2025-09-18 00:42:01.936928', 'delta': '0:00:00.008108', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429902 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:02.212896', 'end': '2025-09-18 00:42:02.220585', 'delta': '0:00:00.007689', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429921 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 00:42:02.460605', 'end': '2025-09-18 00:42:02.469841', 'delta': '0:00:00.009236', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 00:42:11.429949 | orchestrator | 2025-09-18 00:42:11.429963 
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-18 00:42:11.429975 | orchestrator | Thursday 18 September 2025 00:42:04 +0000 (0:00:02.134) 0:00:08.777 **** 2025-09-18 00:42:11.429989 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-18 00:42:11.430002 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-18 00:42:11.430064 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-18 00:42:11.430076 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-18 00:42:11.430087 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-18 00:42:11.430098 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-18 00:42:11.430109 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-18 00:42:11.430119 | orchestrator | 2025-09-18 00:42:11.430131 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-18 00:42:11.430142 | orchestrator | Thursday 18 September 2025 00:42:06 +0000 (0:00:01.371) 0:00:10.148 **** 2025-09-18 00:42:11.430152 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-18 00:42:11.430163 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-18 00:42:11.430175 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-18 00:42:11.430195 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-18 00:42:11.430215 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-18 00:42:11.430240 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-18 00:42:11.430265 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-18 00:42:11.430284 | orchestrator | 2025-09-18 00:42:11.430303 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:42:11.430334 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430356 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430376 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430395 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430413 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430432 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430451 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:42:11.430470 | orchestrator | 2025-09-18 00:42:11.430488 | orchestrator | 2025-09-18 00:42:11.430531 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:42:11.430549 | orchestrator | Thursday 18 September 2025 00:42:08 +0000 (0:00:02.601) 0:00:12.750 **** 2025-09-18 00:42:11.430567 | orchestrator | =============================================================================== 2025-09-18 00:42:11.430584 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.68s 2025-09-18 00:42:11.430603 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 2.60s 2025-09-18 00:42:11.430636 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.13s 2025-09-18 00:42:11.430651 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.67s 2025-09-18 00:42:11.430670 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.37s 2025-09-18 00:42:11.432036 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:11.432066 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task e2388163-9b1e-4ce7-a3aa-7dce447a4b48 is in state SUCCESS 2025-09-18 00:42:11.432084 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:11.432099 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:11.432110 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:11.432130 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:11.432141 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:11.432152 | orchestrator | 2025-09-18 00:42:11 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:11.432163 | orchestrator | 2025-09-18 00:42:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:14.577860 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:14.577922 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:14.581206 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:14.581220 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:14.581226 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:14.581232 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:14.582808 | orchestrator | 2025-09-18 00:42:14 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:14.582932 | orchestrator | 2025-09-18 00:42:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:17.657643 | orchestrator | 2025-09-18 00:42:17 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:17.657892 | orchestrator | 2025-09-18 00:42:17 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:17.658340 | orchestrator | 2025-09-18 00:42:17 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:17.662061 | orchestrator | 2025-09-18 00:42:17 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:17.662568 | orchestrator | 2025-09-18 00:42:17 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:17.663120 | orchestrator | 2025-09-18 00:42:17 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:17.663643 | orchestrator | 2025-09-18 
00:42:17 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:17.663671 | orchestrator | 2025-09-18 00:42:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:29.942901 | orchestrator | 2025-09-18 00:42:29 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:29.943288 |
orchestrator | 2025-09-18 00:42:29 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:29.944204 | orchestrator | 2025-09-18 00:42:29 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:29.945670 | orchestrator | 2025-09-18 00:42:29 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:29.947436 | orchestrator | 2025-09-18 00:42:29 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:29.948026 | orchestrator | 2025-09-18 00:42:29 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:29.949447 | orchestrator | 2025-09-18 00:42:29 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:29.949469 | orchestrator | 2025-09-18 00:42:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:33.025407 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:33.025460 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:33.025466 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:33.025470 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:33.025475 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:33.025480 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:33.025484 | orchestrator | 2025-09-18 00:42:32 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:33.025489 | orchestrator | 2025-09-18 00:42:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:36.337001 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:36.337092 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state STARTED 2025-09-18 00:42:36.337107 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:36.337413 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:36.337428 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:36.337439 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:36.337450 | orchestrator | 2025-09-18 00:42:36 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:36.337461 | orchestrator | 2025-09-18 00:42:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:39.263039 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:39.263129 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task 9c0b2ca7-bf48-4c39-a2d7-e92bda16b654 is in state SUCCESS 2025-09-18 00:42:39.263144 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:39.263156 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is 
in state STARTED 2025-09-18 00:42:39.263167 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:39.263178 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:39.263189 | orchestrator | 2025-09-18 00:42:39 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state STARTED 2025-09-18 00:42:39.263200 | orchestrator | 2025-09-18 00:42:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:42.291476 | orchestrator | 2025-09-18 00:42:42 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:42.292299 | orchestrator | 2025-09-18 00:42:42 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:42.293787 | orchestrator | 2025-09-18 00:42:42 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:42.294975 | orchestrator | 2025-09-18 00:42:42 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:42.297492 | orchestrator | 2025-09-18 00:42:42 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:42.298923 | orchestrator | 2025-09-18 00:42:42 | INFO  | Task 17c04a75-02af-4aba-8771-f3478e88cab6 is in state SUCCESS 2025-09-18 00:42:42.298949 | orchestrator | 2025-09-18 00:42:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:45.327832 | orchestrator | 2025-09-18 00:42:45 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:45.327934 | orchestrator | 2025-09-18 00:42:45 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:45.328698 | orchestrator | 2025-09-18 00:42:45 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:45.330676 | orchestrator | 2025-09-18 00:42:45 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:45.331311 | orchestrator | 2025-09-18 00:42:45 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:45.331332 | orchestrator | 2025-09-18 00:42:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:48.365038 | orchestrator | 2025-09-18 00:42:48 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:48.371145 | orchestrator | 2025-09-18 00:42:48 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:48.374161 | orchestrator | 2025-09-18 00:42:48 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:48.376249 | orchestrator | 2025-09-18 00:42:48 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:48.378682 | orchestrator | 2025-09-18 00:42:48 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:48.378730 | orchestrator | 2025-09-18 00:42:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:42:51.453509 | orchestrator | 2025-09-18 00:42:51 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:42:51.453818 | orchestrator | 2025-09-18 00:42:51 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:42:51.454283 | orchestrator | 2025-09-18 00:42:51 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:42:51.454873 | orchestrator | 2025-09-18 00:42:51 | INFO  | Task 
60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:42:51.455363 | orchestrator | 2025-09-18 00:42:51 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:42:51.455379 | orchestrator | 2025-09-18 00:42:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:06.699402 | orchestrator | 2025-09-18 00:43:06 | INFO  | Task
f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:06.701346 | orchestrator | 2025-09-18 00:43:06 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:06.702832 | orchestrator | 2025-09-18 00:43:06 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:06.703229 | orchestrator | 2025-09-18 00:43:06 | INFO  | Task 60dff364-bddd-493b-bbf7-f05ba24388da is in state STARTED 2025-09-18 00:43:06.704398 | orchestrator | 2025-09-18 00:43:06 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state STARTED 2025-09-18 00:43:06.704538 | orchestrator | 2025-09-18 00:43:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:09.740033 | orchestrator | 2025-09-18 00:43:09 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:09.740818 | orchestrator | 2025-09-18 00:43:09 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:09.741725 | orchestrator | 2025-09-18 00:43:09 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:09.747017 | orchestrator | 2025-09-18 00:43:09.747103 | orchestrator | 2025-09-18 00:43:09.747118 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-18 00:43:09.747131 | orchestrator | 2025-09-18 00:43:09.747143 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-18 00:43:09.747155 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:01.112) 0:00:01.112 **** 2025-09-18 00:43:09.747167 | orchestrator | ok: [testbed-manager] => { 2025-09-18 00:43:09.747202 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-18 00:43:09.747215 | orchestrator | } 2025-09-18 00:43:09.747226 | orchestrator | 2025-09-18 00:43:09.747237 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-18 00:43:09.747248 | orchestrator | Thursday 18 September 2025 00:41:57 +0000 (0:00:00.503) 0:00:01.615 **** 2025-09-18 00:43:09.747259 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.747270 | orchestrator | 2025-09-18 00:43:09.747310 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-18 00:43:09.747322 | orchestrator | Thursday 18 September 2025 00:41:59 +0000 (0:00:02.477) 0:00:04.093 **** 2025-09-18 00:43:09.747333 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-18 00:43:09.747344 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-18 00:43:09.747356 | orchestrator | 2025-09-18 00:43:09.747367 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-18 00:43:09.747378 | orchestrator | Thursday 18 September 2025 00:42:00 +0000 (0:00:01.337) 0:00:05.430 **** 2025-09-18 00:43:09.747389 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.747400 | orchestrator | 2025-09-18 00:43:09.747410 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-18 00:43:09.747421 | orchestrator | Thursday 18 September 2025 00:42:04 +0000 (0:00:03.244) 0:00:08.674 **** 2025-09-18 00:43:09.747432 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.747443 | orchestrator | 2025-09-18 00:43:09.747454 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-18 00:43:09.747465 | orchestrator | Thursday 18 September 2025 00:42:06 +0000 (0:00:02.618) 0:00:11.293 **** 2025-09-18 00:43:09.747476 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-09-18 00:43:09.747487 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.747497 | orchestrator | 2025-09-18 00:43:09.747508 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-18 00:43:09.747519 | orchestrator | Thursday 18 September 2025 00:42:33 +0000 (0:00:27.167) 0:00:38.461 **** 2025-09-18 00:43:09.747530 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.747541 | orchestrator | 2025-09-18 00:43:09.747578 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:43:09.747592 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.747606 | orchestrator | 2025-09-18 00:43:09.747619 | orchestrator | 2025-09-18 00:43:09.747641 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:43:09.747654 | orchestrator | Thursday 18 September 2025 00:42:36 +0000 (0:00:02.958) 0:00:41.420 **** 2025-09-18 00:43:09.747666 | orchestrator | =============================================================================== 2025-09-18 00:43:09.747678 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.17s 2025-09-18 00:43:09.747691 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.24s 2025-09-18 00:43:09.747703 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.96s 2025-09-18 00:43:09.747716 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.62s 2025-09-18 00:43:09.747746 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.48s 2025-09-18 00:43:09.747759 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.34s 2025-09-18 00:43:09.747771 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.50s 2025-09-18 00:43:09.747784 | orchestrator | 2025-09-18 00:43:09.747796 | orchestrator | 2025-09-18 00:43:09.747809 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-18 00:43:09.747822 | orchestrator | 2025-09-18 00:43:09.747835 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-18 00:43:09.747847 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:00.373) 0:00:00.373 **** 2025-09-18 00:43:09.747858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-18 00:43:09.747871 | orchestrator | 2025-09-18 00:43:09.747882 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-18 00:43:09.747892 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:00.779) 0:00:01.153 **** 2025-09-18 00:43:09.747903 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-18 00:43:09.747914 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-18 00:43:09.747925 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-18 00:43:09.747936 | orchestrator | 2025-09-18 00:43:09.747947 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-18 
00:43:09.747957 | orchestrator | Thursday 18 September 2025 00:41:59 +0000 (0:00:02.437) 0:00:03.590 **** 2025-09-18 00:43:09.747968 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.747979 | orchestrator | 2025-09-18 00:43:09.747991 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-18 00:43:09.748001 | orchestrator | Thursday 18 September 2025 00:42:01 +0000 (0:00:02.144) 0:00:05.734 **** 2025-09-18 00:43:09.748028 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-18 00:43:09.748039 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.748050 | orchestrator | 2025-09-18 00:43:09.748061 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-18 00:43:09.748072 | orchestrator | Thursday 18 September 2025 00:42:35 +0000 (0:00:33.485) 0:00:39.220 **** 2025-09-18 00:43:09.748083 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.748094 | orchestrator | 2025-09-18 00:43:09.748105 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-18 00:43:09.748115 | orchestrator | Thursday 18 September 2025 00:42:37 +0000 (0:00:02.327) 0:00:41.547 **** 2025-09-18 00:43:09.748126 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.748137 | orchestrator | 2025-09-18 00:43:09.748148 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-18 00:43:09.748158 | orchestrator | Thursday 18 September 2025 00:42:38 +0000 (0:00:01.091) 0:00:42.639 **** 2025-09-18 00:43:09.748169 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.748180 | orchestrator | 2025-09-18 00:43:09.748191 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-18 00:43:09.748201 | orchestrator | Thursday 18 September 2025 00:42:39 +0000 (0:00:01.484) 0:00:44.124 **** 2025-09-18 00:43:09.748212 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.748223 | orchestrator | 2025-09-18 00:43:09.748233 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-18 00:43:09.748244 | orchestrator | Thursday 18 September 2025 00:42:40 +0000 (0:00:00.945) 0:00:45.069 **** 2025-09-18 00:43:09.748255 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.748266 | orchestrator | 2025-09-18 00:43:09.748276 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-18 00:43:09.748287 | orchestrator | Thursday 18 September 2025 00:42:41 +0000 (0:00:00.562) 0:00:45.631 **** 2025-09-18 00:43:09.748305 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.748316 | orchestrator | 2025-09-18 00:43:09.748327 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:43:09.748338 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.748348 | orchestrator | 2025-09-18 00:43:09.748359 | orchestrator | 2025-09-18 00:43:09.748370 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:43:09.748381 | orchestrator | Thursday 18 September 2025 00:42:41 +0000 (0:00:00.304) 0:00:45.936 **** 2025-09-18 00:43:09.748391 | orchestrator | 
=============================================================================== 2025-09-18 00:43:09.748402 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.49s 2025-09-18 00:43:09.748413 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.44s 2025-09-18 00:43:09.748424 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.33s 2025-09-18 00:43:09.748440 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.14s 2025-09-18 00:43:09.748451 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.48s 2025-09-18 00:43:09.748462 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.09s 2025-09-18 00:43:09.748472 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.95s 2025-09-18 00:43:09.748483 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.78s 2025-09-18 00:43:09.748494 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.56s 2025-09-18 00:43:09.748505 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.30s 2025-09-18 00:43:09.748515 | orchestrator | 2025-09-18 00:43:09.748526 | orchestrator | 2025-09-18 00:43:09.748537 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:43:09.748567 | orchestrator | 2025-09-18 00:43:09.748578 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:43:09.748589 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:00.715) 0:00:00.715 **** 2025-09-18 00:43:09.748600 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-18 00:43:09.748611 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-18 00:43:09.748622 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-18 00:43:09.748633 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-18 00:43:09.748644 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-18 00:43:09.748655 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-18 00:43:09.748666 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-18 00:43:09.748676 | orchestrator | 2025-09-18 00:43:09.748687 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-18 00:43:09.748698 | orchestrator | 2025-09-18 00:43:09.748709 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-18 00:43:09.748720 | orchestrator | Thursday 18 September 2025 00:41:58 +0000 (0:00:02.429) 0:00:03.145 **** 2025-09-18 00:43:09.748745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:43:09.748764 | orchestrator | 2025-09-18 00:43:09.748775 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-18 00:43:09.748786 | orchestrator | Thursday 18 September 2025 00:42:00 +0000 (0:00:01.371) 0:00:04.516 **** 2025-09-18 
00:43:09.748797 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:43:09.748808 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:43:09.748819 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:43:09.748830 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.748848 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:43:09.748865 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:43:09.748876 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:43:09.748887 | orchestrator | 2025-09-18 00:43:09.748898 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-18 00:43:09.748909 | orchestrator | Thursday 18 September 2025 00:42:01 +0000 (0:00:01.331) 0:00:05.848 **** 2025-09-18 00:43:09.748920 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:43:09.748930 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:43:09.748941 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:43:09.748952 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:43:09.748962 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:43:09.748973 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:43:09.748983 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.748994 | orchestrator | 2025-09-18 00:43:09.749005 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-18 00:43:09.749016 | orchestrator | Thursday 18 September 2025 00:42:04 +0000 (0:00:03.097) 0:00:08.946 **** 2025-09-18 00:43:09.749027 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:43:09.749038 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:43:09.749049 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:43:09.749059 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:43:09.749070 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:43:09.749080 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:43:09.749091 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.749102 | orchestrator | 2025-09-18 00:43:09.749112 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-18 00:43:09.749123 | orchestrator | Thursday 18 September 2025 00:42:07 +0000 (0:00:02.647) 0:00:11.594 **** 2025-09-18 00:43:09.749134 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:43:09.749144 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:43:09.749155 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:43:09.749166 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:43:09.749176 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:43:09.749187 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:43:09.749197 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.749208 | orchestrator | 2025-09-18 00:43:09.749219 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-18 00:43:09.749230 | orchestrator | Thursday 18 September 2025 00:42:18 +0000 (0:00:10.902) 0:00:22.497 **** 2025-09-18 00:43:09.749240 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:43:09.749251 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:43:09.749262 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:43:09.749272 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:43:09.749283 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:43:09.749293 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:43:09.749304 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.749315 | 
orchestrator | 2025-09-18 00:43:09.749325 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-18 00:43:09.749336 | orchestrator | Thursday 18 September 2025 00:42:48 +0000 (0:00:30.515) 0:00:53.012 **** 2025-09-18 00:43:09.749352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:43:09.749365 | orchestrator | 2025-09-18 00:43:09.749376 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-18 00:43:09.749387 | orchestrator | Thursday 18 September 2025 00:42:50 +0000 (0:00:01.681) 0:00:54.694 **** 2025-09-18 00:43:09.749398 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-18 00:43:09.749409 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-18 00:43:09.749419 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-18 00:43:09.749430 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-18 00:43:09.749448 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-18 00:43:09.749459 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-18 00:43:09.749470 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-18 00:43:09.749480 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-18 00:43:09.749491 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-18 00:43:09.749502 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-18 00:43:09.749512 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-18 00:43:09.749523 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-18 00:43:09.749534 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-18 00:43:09.749604 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-18 00:43:09.749618 | orchestrator | 2025-09-18 00:43:09.749629 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-18 00:43:09.749641 | orchestrator | Thursday 18 September 2025 00:42:55 +0000 (0:00:04.909) 0:00:59.604 **** 2025-09-18 00:43:09.749652 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.749663 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:43:09.749674 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:43:09.749685 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:43:09.749695 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:43:09.749706 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:43:09.749717 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:43:09.749727 | orchestrator | 2025-09-18 00:43:09.749739 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-18 00:43:09.749749 | orchestrator | Thursday 18 September 2025 00:42:56 +0000 (0:00:01.092) 0:01:00.697 **** 2025-09-18 00:43:09.749760 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.749771 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:43:09.749782 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:43:09.749793 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:43:09.749804 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:43:09.749814 | orchestrator | 
changed: [testbed-node-4] 2025-09-18 00:43:09.749825 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:43:09.749836 | orchestrator | 2025-09-18 00:43:09.749847 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-18 00:43:09.749864 | orchestrator | Thursday 18 September 2025 00:42:57 +0000 (0:00:01.268) 0:01:01.965 **** 2025-09-18 00:43:09.749875 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:43:09.749886 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:43:09.749897 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:43:09.749908 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.749919 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:43:09.749929 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:43:09.749940 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:43:09.749951 | orchestrator | 2025-09-18 00:43:09.749962 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-18 00:43:09.749973 | orchestrator | Thursday 18 September 2025 00:42:59 +0000 (0:00:01.553) 0:01:03.519 **** 2025-09-18 00:43:09.749984 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:43:09.749995 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:43:09.750006 | orchestrator | ok: [testbed-manager] 2025-09-18 00:43:09.750088 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:43:09.750106 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:43:09.750122 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:43:09.750137 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:43:09.750152 | orchestrator | 2025-09-18 00:43:09.750168 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-18 00:43:09.750183 | orchestrator | Thursday 18 September 2025 00:43:01 +0000 (0:00:02.062) 0:01:05.582 **** 2025-09-18 00:43:09.750199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-18 00:43:09.750228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:43:09.750245 | orchestrator | 2025-09-18 00:43:09.750261 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-18 00:43:09.750277 | orchestrator | Thursday 18 September 2025 00:43:02 +0000 (0:00:01.638) 0:01:07.220 **** 2025-09-18 00:43:09.750288 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.750297 | orchestrator | 2025-09-18 00:43:09.750307 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-18 00:43:09.750317 | orchestrator | Thursday 18 September 2025 00:43:04 +0000 (0:00:01.747) 0:01:08.968 **** 2025-09-18 00:43:09.750326 | orchestrator | changed: [testbed-manager] 2025-09-18 00:43:09.750336 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:43:09.750345 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:43:09.750354 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:43:09.750364 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:43:09.750373 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:43:09.750383 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:43:09.750392 | orchestrator | 2025-09-18 00:43:09.750402 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-18 00:43:09.750411 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750421 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750431 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750441 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750451 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750460 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750470 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:43:09.750479 | orchestrator | 2025-09-18 00:43:09.750489 | orchestrator | 2025-09-18 00:43:09.750499 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:43:09.750508 | orchestrator | Thursday 18 September 2025 00:43:07 +0000 (0:00:03.088) 0:01:12.056 **** 2025-09-18 00:43:09.750518 | orchestrator | =============================================================================== 2025-09-18 00:43:09.750528 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 30.52s 2025-09-18 00:43:09.750537 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.90s 2025-09-18 00:43:09.750564 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.91s 2025-09-18 00:43:09.750574 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.10s 2025-09-18 00:43:09.750584 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.09s 2025-09-18 00:43:09.750593 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.65s 2025-09-18 00:43:09.750641 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.43s 2025-09-18 00:43:09.750652 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.06s 2025-09-18 00:43:09.750661 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.75s 2025-09-18 00:43:09.750679 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.68s 2025-09-18 00:43:09.750689 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.64s 2025-09-18 00:43:09.750707 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.55s 2025-09-18 00:43:09.750716 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.37s 2025-09-18 00:43:09.750726 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.33s 2025-09-18 00:43:09.750736 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.27s 2025-09-18 00:43:09.750745 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.09s 2025-09-18 00:43:09.750755 | orchestrator | 2025-09-18 00:43:09 | INFO  | Task 
60dff364-bddd-493b-bbf7-f05ba24388da is in state SUCCESS 2025-09-18 00:43:09.750765 | orchestrator | 2025-09-18 00:43:09 | INFO  | Task 19c110ab-a244-4ed1-bec9-00f070e7a341 is in state SUCCESS 2025-09-18 00:43:09.750775 | orchestrator | 2025-09-18 00:43:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:12.788665 | orchestrator | 2025-09-18 00:43:12 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:12.789513 | orchestrator | 2025-09-18 00:43:12 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:12.791523 | orchestrator | 2025-09-18 00:43:12 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:12.791988 | orchestrator | 2025-09-18 00:43:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:15.830721 | orchestrator | 2025-09-18 00:43:15 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:15.830880 | orchestrator | 2025-09-18 00:43:15 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:15.831976 | orchestrator | 2025-09-18 00:43:15 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:15.832036 | orchestrator | 2025-09-18 00:43:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:18.881494 | orchestrator | 2025-09-18 00:43:18 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:18.885092 | orchestrator | 2025-09-18 00:43:18 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:18.887476 | orchestrator | 2025-09-18 00:43:18 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:18.887603 | orchestrator | 2025-09-18 00:43:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:21.934790 | orchestrator | 2025-09-18 00:43:21 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:21.938122 | orchestrator | 2025-09-18 00:43:21 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:21.940688 | orchestrator | 2025-09-18 00:43:21 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:21.941392 | orchestrator | 2025-09-18 00:43:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:24.980322 | orchestrator | 2025-09-18 00:43:24 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:24.982203 | orchestrator | 2025-09-18 00:43:24 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:24.984991 | orchestrator | 2025-09-18 00:43:24 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:24.985070 | orchestrator | 2025-09-18 00:43:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:28.031926 | orchestrator | 2025-09-18 00:43:28 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:43:28.033686 | orchestrator | 2025-09-18 00:43:28 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:28.034247 | orchestrator | 2025-09-18 00:43:28 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:28.034695 | orchestrator | 2025-09-18 00:43:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:43:31.082895 | orchestrator | 2025-09-18 00:43:31 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state 
STARTED 2025-09-18 00:43:31.083757 | orchestrator | 2025-09-18 00:43:31 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:43:31.084727 | orchestrator | 2025-09-18 00:43:31 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:43:31.084760 | orchestrator | 2025-09-18 00:43:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:10.788296 | orchestrator | 2025-09-18 00:44:10 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:10.789743 | orchestrator | 2025-09-18 00:44:10 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state STARTED 2025-09-18 00:44:10.790885 | orchestrator | 2025-09-18 00:44:10 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:10.791034 | orchestrator | 2025-09-18 00:44:10 |
INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:13.832614 | orchestrator | 2025-09-18 00:44:13 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:13.842643 | orchestrator | 2025-09-18 00:44:13.842719 | orchestrator | 2025-09-18 00:44:13.842733 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-18 00:44:13.842745 | orchestrator | 2025-09-18 00:44:13.842756 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-18 00:44:13.842768 | orchestrator | Thursday 18 September 2025 00:42:12 +0000 (0:00:00.188) 0:00:00.188 **** 2025-09-18 00:44:13.842779 | orchestrator | ok: [testbed-manager] 2025-09-18 00:44:13.842870 | orchestrator | 2025-09-18 00:44:13.842892 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-18 00:44:13.842904 | orchestrator | Thursday 18 September 2025 00:42:13 +0000 (0:00:01.482) 0:00:01.671 **** 2025-09-18 00:44:13.842915 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-18 00:44:13.842926 | orchestrator | 2025-09-18 00:44:13.842937 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-18 00:44:13.842949 | orchestrator | Thursday 18 September 2025 00:42:14 +0000 (0:00:00.630) 0:00:02.302 **** 2025-09-18 00:44:13.842960 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.842971 | orchestrator | 2025-09-18 00:44:13.842982 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-18 00:44:13.842993 | orchestrator | Thursday 18 September 2025 00:42:15 +0000 (0:00:01.285) 0:00:03.588 **** 2025-09-18 00:44:13.843004 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
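The long run of messages at the top of this excerpt is the deployment driver polling its task IDs until they leave the STARTED state, sleeping one second between rounds. Below is a minimal sketch of that wait pattern, assuming a hypothetical get_task_state callable and a wait_for_tasks helper; the actual OSISM client interface is not shown in this log and may differ.

    import time
    from datetime import datetime, timezone

    def log(msg: str) -> None:
        # Mimic the "<timestamp> | INFO  | <message>" format seen in the job output.
        ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
        print(f"{ts} | INFO  | {msg}")

    def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
        # Poll every task until none of them is still PENDING/STARTED,
        # logging each state and waiting `interval` seconds between rounds.
        deadline = time.monotonic() + timeout
        pending = set(task_ids)
        while pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                log(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                log(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        # Toy driver: one fake task that reports STARTED twice, then SUCCESS.
        import itertools
        fake = {"f9efe551-4669-434b-badc-bed1065901bd":
                itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))}
        wait_for_tasks(fake, lambda task_id: next(fake[task_id]), interval=1)

The roughly three-second gap between polling rounds in the log, despite the one-second sleep, suggests that each state lookup itself takes on the order of a second.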
2025-09-18 00:44:13.843015 | orchestrator | ok: [testbed-manager] 2025-09-18 00:44:13.843026 | orchestrator | 2025-09-18 00:44:13.843037 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-18 00:44:13.843116 | orchestrator | Thursday 18 September 2025 00:43:00 +0000 (0:00:44.919) 0:00:48.507 **** 2025-09-18 00:44:13.843129 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.843140 | orchestrator | 2025-09-18 00:44:13.843151 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:44:13.843162 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:13.843176 | orchestrator | 2025-09-18 00:44:13.843187 | orchestrator | 2025-09-18 00:44:13.843198 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:44:13.843209 | orchestrator | Thursday 18 September 2025 00:43:08 +0000 (0:00:07.942) 0:00:56.450 **** 2025-09-18 00:44:13.843220 | orchestrator | =============================================================================== 2025-09-18 00:44:13.843231 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 44.92s 2025-09-18 00:44:13.843242 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.94s 2025-09-18 00:44:13.843253 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.48s 2025-09-18 00:44:13.843264 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.29s 2025-09-18 00:44:13.843275 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s 2025-09-18 00:44:13.843286 | orchestrator | 2025-09-18 00:44:13.843297 | orchestrator | 2025-09-18 00:44:13.843309 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-18 00:44:13.843320 | orchestrator | 2025-09-18 00:44:13.843331 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-18 00:44:13.843342 | orchestrator | Thursday 18 September 2025 00:41:49 +0000 (0:00:00.219) 0:00:00.219 **** 2025-09-18 00:44:13.843353 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:44:13.843385 | orchestrator | 2025-09-18 00:44:13.843397 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-18 00:44:13.843408 | orchestrator | Thursday 18 September 2025 00:41:50 +0000 (0:00:01.158) 0:00:01.377 **** 2025-09-18 00:44:13.843418 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843429 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843440 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843451 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843462 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843473 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843484 | orchestrator | changed: 
[testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843495 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843506 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843517 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843528 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 00:44:13.843539 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843551 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843562 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843593 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843604 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843661 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843674 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 00:44:13.843686 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843702 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843714 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 00:44:13.843725 | orchestrator | 2025-09-18 00:44:13.843736 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-18 00:44:13.843747 | orchestrator | Thursday 18 September 2025 00:41:54 +0000 (0:00:04.014) 0:00:05.392 **** 2025-09-18 00:44:13.843761 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:44:13.843775 | orchestrator | 2025-09-18 00:44:13.843788 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-18 00:44:13.843800 | orchestrator | Thursday 18 September 2025 00:41:55 +0000 (0:00:01.092) 0:00:06.484 **** 2025-09-18 00:44:13.843817 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.843960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-18 00:44:13.843974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.843994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844021 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844193 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.844205 | orchestrator 
| 2025-09-18 00:44:13.844216 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-18 00:44:13.844228 | orchestrator | Thursday 18 September 2025 00:41:59 +0000 (0:00:04.369) 0:00:10.854 **** 2025-09-18 00:44:13.844270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844291 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844310 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844321 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:44:13.844333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844368 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:44:13.844379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844482 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:44:13.844493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 
00:44:13.844505 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:44:13.844516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844550 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:44:13.844564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844722 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:44:13.844741 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844793 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:44:13.844810 | orchestrator | 2025-09-18 00:44:13.844827 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-18 00:44:13.844845 | orchestrator | Thursday 18 September 2025 00:42:01 +0000 (0:00:01.982) 0:00:12.837 **** 2025-09-18 00:44:13.844863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844881 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844923 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.844973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.844991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845011 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:44:13.845030 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:44:13.845048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.845067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 
00:44:13.845089 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:44:13.845101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.845134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845157 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:44:13.845168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.845180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845202 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:44:13.845213 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.845233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845284 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:44:13.845306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 00:44:13.845324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.845359 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:44:13.845376 | orchestrator | 2025-09-18 00:44:13.845393 | orchestrator | TASK [common : Copying over /run subdirectories conf] 
************************** 2025-09-18 00:44:13.845410 | orchestrator | Thursday 18 September 2025 00:42:04 +0000 (0:00:02.435) 0:00:15.272 **** 2025-09-18 00:44:13.845428 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:44:13.845445 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:44:13.845464 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:44:13.845483 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:44:13.845502 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:44:13.845514 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:44:13.845525 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:44:13.845536 | orchestrator | 2025-09-18 00:44:13.845547 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-18 00:44:13.845558 | orchestrator | Thursday 18 September 2025 00:42:05 +0000 (0:00:00.836) 0:00:16.108 **** 2025-09-18 00:44:13.845640 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:44:13.845654 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:44:13.845665 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:44:13.845675 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:44:13.845686 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:44:13.845697 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:44:13.845708 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:44:13.845729 | orchestrator | 2025-09-18 00:44:13.845740 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-18 00:44:13.845751 | orchestrator | Thursday 18 September 2025 00:42:06 +0000 (0:00:01.136) 0:00:17.245 **** 2025-09-18 00:44:13.845763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845775 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.845899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845918 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.845988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.846000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.846058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.846075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.846086 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.846096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.846105 | orchestrator | 2025-09-18 00:44:13.846115 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-18 00:44:13.846125 | orchestrator | Thursday 18 September 2025 00:42:13 +0000 (0:00:07.393) 0:00:24.639 **** 2025-09-18 00:44:13.846135 | orchestrator | [WARNING]: Skipped 2025-09-18 00:44:13.846146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-18 00:44:13.846156 | orchestrator | to this access issue: 2025-09-18 00:44:13.846166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-18 00:44:13.846182 | orchestrator | directory 2025-09-18 00:44:13.846192 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:44:13.846201 | orchestrator | 2025-09-18 00:44:13.846211 | orchestrator | TASK [common : Find custom 
fluentd filter config files] ************************ 2025-09-18 00:44:13.846221 | orchestrator | Thursday 18 September 2025 00:42:14 +0000 (0:00:01.108) 0:00:25.747 **** 2025-09-18 00:44:13.846230 | orchestrator | [WARNING]: Skipped 2025-09-18 00:44:13.846240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-18 00:44:13.846250 | orchestrator | to this access issue: 2025-09-18 00:44:13.846259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-18 00:44:13.846269 | orchestrator | directory 2025-09-18 00:44:13.846279 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:44:13.846288 | orchestrator | 2025-09-18 00:44:13.846298 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-18 00:44:13.846313 | orchestrator | Thursday 18 September 2025 00:42:15 +0000 (0:00:01.118) 0:00:26.866 **** 2025-09-18 00:44:13.846329 | orchestrator | [WARNING]: Skipped 2025-09-18 00:44:13.846345 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-18 00:44:13.846361 | orchestrator | to this access issue: 2025-09-18 00:44:13.846377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-18 00:44:13.846393 | orchestrator | directory 2025-09-18 00:44:13.846409 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:44:13.846424 | orchestrator | 2025-09-18 00:44:13.846438 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-18 00:44:13.846452 | orchestrator | Thursday 18 September 2025 00:42:16 +0000 (0:00:00.922) 0:00:27.789 **** 2025-09-18 00:44:13.846466 | orchestrator | [WARNING]: Skipped 2025-09-18 00:44:13.846481 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-18 00:44:13.846496 | orchestrator | to this access issue: 2025-09-18 00:44:13.846512 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-18 00:44:13.846529 | orchestrator | directory 2025-09-18 00:44:13.846541 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:44:13.846551 | orchestrator | 2025-09-18 00:44:13.846561 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-18 00:44:13.846596 | orchestrator | Thursday 18 September 2025 00:42:17 +0000 (0:00:00.753) 0:00:28.542 **** 2025-09-18 00:44:13.846606 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.846616 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.846626 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.846635 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:44:13.846645 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.846655 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.846664 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:44:13.846674 | orchestrator | 2025-09-18 00:44:13.846683 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-18 00:44:13.846693 | orchestrator | Thursday 18 September 2025 00:42:21 +0000 (0:00:03.906) 0:00:32.449 **** 2025-09-18 00:44:13.846703 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846713 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846723 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846741 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846751 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846761 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846785 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 00:44:13.846795 | orchestrator | 2025-09-18 00:44:13.846804 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-18 00:44:13.846814 | orchestrator | Thursday 18 September 2025 00:42:24 +0000 (0:00:03.117) 0:00:35.567 **** 2025-09-18 00:44:13.846823 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.846833 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.846843 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.846852 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.846862 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:44:13.846871 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.846880 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:44:13.846890 | orchestrator | 2025-09-18 00:44:13.846899 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-18 00:44:13.846909 | orchestrator | Thursday 18 September 2025 00:42:27 +0000 (0:00:02.973) 0:00:38.541 **** 2025-09-18 00:44:13.846922 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.846939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.846955 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.846971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.846988 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847014 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847047 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.847069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.847089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847099 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.847132 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.847158 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847167 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:44:13.847187 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847199 | orchestrator | 2025-09-18 00:44:13 | INFO  | Task 88e9dac7-aa80-44a7-ba3c-3b2a3bff27e2 is in state SUCCESS 2025-09-18 00:44:13.847211 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847228 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847238 | orchestrator | 2025-09-18 00:44:13.847248 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-18 00:44:13.847258 | orchestrator | Thursday 18 September 2025 00:42:30 +0000 (0:00:02.849) 0:00:41.390 **** 2025-09-18 00:44:13.847268 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 00:44:13.847284 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 00:44:13.847294 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18
00:44:13.847303 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 00:44:13.847312 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 00:44:13.847326 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 00:44:13.847336 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 00:44:13.847345 | orchestrator | 2025-09-18 00:44:13.847355 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-18 00:44:13.847365 | orchestrator | Thursday 18 September 2025 00:42:32 +0000 (0:00:02.491) 0:00:43.882 **** 2025-09-18 00:44:13.847374 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847384 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847393 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847403 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847412 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847422 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847431 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 00:44:13.847441 | orchestrator | 2025-09-18 00:44:13.847450 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-18 00:44:13.847459 | orchestrator | Thursday 18 September 2025 00:42:35 +0000 (0:00:02.896) 0:00:46.778 **** 2025-09-18 00:44:13.847469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847515 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 00:44:13.847560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847656 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847761 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:44:13.847804 | orchestrator | 2025-09-18 00:44:13.847814 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-18 00:44:13.847823 | orchestrator | Thursday 18 September 2025 00:42:38 +0000 (0:00:03.197) 0:00:49.976 **** 2025-09-18 00:44:13.847833 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.847843 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.847853 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.847866 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.847876 | orchestrator | changed: [testbed-node-3] 2025-09-18 
00:44:13.847886 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.847895 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:44:13.847905 | orchestrator | 2025-09-18 00:44:13.847915 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-18 00:44:13.847930 | orchestrator | Thursday 18 September 2025 00:42:40 +0000 (0:00:01.667) 0:00:51.644 **** 2025-09-18 00:44:13.847945 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.847961 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.847976 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.847991 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.848007 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:44:13.848023 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.848039 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:44:13.848055 | orchestrator | 2025-09-18 00:44:13.848072 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848088 | orchestrator | Thursday 18 September 2025 00:42:41 +0000 (0:00:01.242) 0:00:52.886 **** 2025-09-18 00:44:13.848102 | orchestrator | 2025-09-18 00:44:13.848112 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848122 | orchestrator | Thursday 18 September 2025 00:42:41 +0000 (0:00:00.060) 0:00:52.947 **** 2025-09-18 00:44:13.848132 | orchestrator | 2025-09-18 00:44:13.848150 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848160 | orchestrator | Thursday 18 September 2025 00:42:41 +0000 (0:00:00.062) 0:00:53.009 **** 2025-09-18 00:44:13.848169 | orchestrator | 2025-09-18 00:44:13.848179 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848188 | orchestrator | Thursday 18 September 2025 00:42:42 +0000 (0:00:00.061) 0:00:53.071 **** 2025-09-18 00:44:13.848198 | orchestrator | 2025-09-18 00:44:13.848208 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848217 | orchestrator | Thursday 18 September 2025 00:42:42 +0000 (0:00:00.159) 0:00:53.230 **** 2025-09-18 00:44:13.848227 | orchestrator | 2025-09-18 00:44:13.848236 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848246 | orchestrator | Thursday 18 September 2025 00:42:42 +0000 (0:00:00.058) 0:00:53.289 **** 2025-09-18 00:44:13.848255 | orchestrator | 2025-09-18 00:44:13.848265 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 00:44:13.848275 | orchestrator | Thursday 18 September 2025 00:42:42 +0000 (0:00:00.057) 0:00:53.347 **** 2025-09-18 00:44:13.848285 | orchestrator | 2025-09-18 00:44:13.848294 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-18 00:44:13.848304 | orchestrator | Thursday 18 September 2025 00:42:42 +0000 (0:00:00.080) 0:00:53.428 **** 2025-09-18 00:44:13.848314 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.848323 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.848333 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.848343 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.848352 | orchestrator | changed: [testbed-node-5] 
2025-09-18 00:44:13.848362 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:44:13.848371 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.848381 | orchestrator | 2025-09-18 00:44:13.848390 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-18 00:44:13.848400 | orchestrator | Thursday 18 September 2025 00:43:17 +0000 (0:00:35.325) 0:01:28.753 **** 2025-09-18 00:44:13.848410 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.848419 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.848429 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:44:13.848438 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.848448 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.848458 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:44:13.848467 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.848477 | orchestrator | 2025-09-18 00:44:13.848486 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-18 00:44:13.848496 | orchestrator | Thursday 18 September 2025 00:43:59 +0000 (0:00:41.656) 0:02:10.410 **** 2025-09-18 00:44:13.848506 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:44:13.848515 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:44:13.848525 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:44:13.848535 | orchestrator | ok: [testbed-manager] 2025-09-18 00:44:13.848544 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:44:13.848554 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:44:13.848563 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:44:13.848600 | orchestrator | 2025-09-18 00:44:13.848610 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-18 00:44:13.848620 | orchestrator | Thursday 18 September 2025 00:44:02 +0000 (0:00:02.731) 0:02:13.141 **** 2025-09-18 00:44:13.848629 | orchestrator | changed: [testbed-manager] 2025-09-18 00:44:13.848639 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:13.848649 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:13.848658 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:13.848668 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:44:13.848678 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:44:13.848687 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:44:13.848697 | orchestrator | 2025-09-18 00:44:13.848707 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:44:13.848725 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 00:44:13.848748 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 00:44:13.848764 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 00:44:13.848782 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 00:44:13.848797 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 00:44:13.848821 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 00:44:13.848835 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 
00:44:13.848851 | orchestrator | 2025-09-18 00:44:13.848866 | orchestrator | 2025-09-18 00:44:13.848881 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:44:13.848897 | orchestrator | Thursday 18 September 2025 00:44:12 +0000 (0:00:10.307) 0:02:23.449 **** 2025-09-18 00:44:13.848912 | orchestrator | =============================================================================== 2025-09-18 00:44:13.848928 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 41.66s 2025-09-18 00:44:13.848943 | orchestrator | common : Restart fluentd container ------------------------------------- 35.33s 2025-09-18 00:44:13.848959 | orchestrator | common : Restart cron container ---------------------------------------- 10.31s 2025-09-18 00:44:13.848976 | orchestrator | common : Copying over config.json files for services -------------------- 7.39s 2025-09-18 00:44:13.848991 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.37s 2025-09-18 00:44:13.849007 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.01s 2025-09-18 00:44:13.849023 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.91s 2025-09-18 00:44:13.849039 | orchestrator | common : Check common containers ---------------------------------------- 3.20s 2025-09-18 00:44:13.849055 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.12s 2025-09-18 00:44:13.849071 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.97s 2025-09-18 00:44:13.849087 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.90s 2025-09-18 00:44:13.849103 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.85s 2025-09-18 00:44:13.849120 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.73s 2025-09-18 00:44:13.849136 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.49s 2025-09-18 00:44:13.849152 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.44s 2025-09-18 00:44:13.849168 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.98s 2025-09-18 00:44:13.849185 | orchestrator | common : Creating log volume -------------------------------------------- 1.67s 2025-09-18 00:44:13.849202 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.24s 2025-09-18 00:44:13.849217 | orchestrator | common : include_tasks -------------------------------------------------- 1.16s 2025-09-18 00:44:13.849232 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.14s 2025-09-18 00:44:13.849248 | orchestrator | 2025-09-18 00:44:13 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:13.849282 | orchestrator | 2025-09-18 00:44:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:16.880506 | orchestrator | 2025-09-18 00:44:16 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:16.880897 | orchestrator | 2025-09-18 00:44:16 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:16.881363 | orchestrator | 2025-09-18 00:44:16 | INFO  | Task 
dff9f79a-b1f9-4dd3-bd98-9c8a4e635f50 is in state STARTED 2025-09-18 00:44:16.881863 | orchestrator | 2025-09-18 00:44:16 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:16.882581 | orchestrator | 2025-09-18 00:44:16 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:16.883090 | orchestrator | 2025-09-18 00:44:16 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:16.883277 | orchestrator | 2025-09-18 00:44:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:19.927241 | orchestrator | 2025-09-18 00:44:19 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:19.929056 | orchestrator | 2025-09-18 00:44:19 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:19.931286 | orchestrator | 2025-09-18 00:44:19 | INFO  | Task dff9f79a-b1f9-4dd3-bd98-9c8a4e635f50 is in state STARTED 2025-09-18 00:44:19.937669 | orchestrator | 2025-09-18 00:44:19 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:19.938349 | orchestrator | 2025-09-18 00:44:19 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:19.938958 | orchestrator | 2025-09-18 00:44:19 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:19.939109 | orchestrator | 2025-09-18 00:44:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:22.969349 | orchestrator | 2025-09-18 00:44:22 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:22.969428 | orchestrator | 2025-09-18 00:44:22 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:22.972854 | orchestrator | 2025-09-18 00:44:22 | INFO  | Task dff9f79a-b1f9-4dd3-bd98-9c8a4e635f50 is in state STARTED 2025-09-18 00:44:22.973499 | orchestrator | 2025-09-18 00:44:22 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:22.974078 | orchestrator | 2025-09-18 00:44:22 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:22.975105 | orchestrator | 2025-09-18 00:44:22 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:22.975128 | orchestrator | 2025-09-18 00:44:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:26.005547 | orchestrator | 2025-09-18 00:44:26 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:26.005787 | orchestrator | 2025-09-18 00:44:26 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:26.006543 | orchestrator | 2025-09-18 00:44:26 | INFO  | Task dff9f79a-b1f9-4dd3-bd98-9c8a4e635f50 is in state STARTED 2025-09-18 00:44:26.007218 | orchestrator | 2025-09-18 00:44:26 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:26.007936 | orchestrator | 2025-09-18 00:44:26 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:26.011628 | orchestrator | 2025-09-18 00:44:26 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:26.011688 | orchestrator | 2025-09-18 00:44:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:29.034900 | orchestrator | 2025-09-18 00:44:29 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:29.038273 | orchestrator | 2025-09-18 
00:44:29 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:29.038351 | orchestrator | 2025-09-18 00:44:29 | INFO  | Task dff9f79a-b1f9-4dd3-bd98-9c8a4e635f50 is in state STARTED 2025-09-18 00:44:29.038375 | orchestrator | 2025-09-18 00:44:29 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:29.038394 | orchestrator | 2025-09-18 00:44:29 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:29.038417 | orchestrator | 2025-09-18 00:44:29 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:29.038441 | orchestrator | 2025-09-18 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:32.092799 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:32.092950 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:32.092966 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task dff9f79a-b1f9-4dd3-bd98-9c8a4e635f50 is in state SUCCESS 2025-09-18 00:44:32.092977 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:32.092988 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:32.092998 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:32.093009 | orchestrator | 2025-09-18 00:44:32 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:32.093020 | orchestrator | 2025-09-18 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:35.135910 | orchestrator | 2025-09-18 00:44:35 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:35.135995 | orchestrator | 2025-09-18 00:44:35 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:35.136010 | orchestrator | 2025-09-18 00:44:35 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:35.136022 | orchestrator | 2025-09-18 00:44:35 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:35.136033 | orchestrator | 2025-09-18 00:44:35 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:35.136864 | orchestrator | 2025-09-18 00:44:35 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:35.137478 | orchestrator | 2025-09-18 00:44:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:38.165446 | orchestrator | 2025-09-18 00:44:38 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:38.165691 | orchestrator | 2025-09-18 00:44:38 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:38.166287 | orchestrator | 2025-09-18 00:44:38 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:38.166935 | orchestrator | 2025-09-18 00:44:38 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:38.167658 | orchestrator | 2025-09-18 00:44:38 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:38.169812 | orchestrator | 2025-09-18 00:44:38 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 
2025-09-18 00:44:38.169906 | orchestrator | 2025-09-18 00:44:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:41.212790 | orchestrator | 2025-09-18 00:44:41 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:41.215293 | orchestrator | 2025-09-18 00:44:41 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:41.216071 | orchestrator | 2025-09-18 00:44:41 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:41.216725 | orchestrator | 2025-09-18 00:44:41 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:41.217484 | orchestrator | 2025-09-18 00:44:41 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:41.218405 | orchestrator | 2025-09-18 00:44:41 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:41.218430 | orchestrator | 2025-09-18 00:44:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:44.244622 | orchestrator | 2025-09-18 00:44:44 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:44.244918 | orchestrator | 2025-09-18 00:44:44 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:44.245949 | orchestrator | 2025-09-18 00:44:44 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:44.246975 | orchestrator | 2025-09-18 00:44:44 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:44.247957 | orchestrator | 2025-09-18 00:44:44 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:44.248814 | orchestrator | 2025-09-18 00:44:44 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:44.249038 | orchestrator | 2025-09-18 00:44:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:47.284796 | orchestrator | 2025-09-18 00:44:47 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:47.285202 | orchestrator | 2025-09-18 00:44:47 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:47.285975 | orchestrator | 2025-09-18 00:44:47 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:47.286640 | orchestrator | 2025-09-18 00:44:47 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:47.287698 | orchestrator | 2025-09-18 00:44:47 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:47.288375 | orchestrator | 2025-09-18 00:44:47 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:47.288410 | orchestrator | 2025-09-18 00:44:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:50.316983 | orchestrator | 2025-09-18 00:44:50 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:50.317257 | orchestrator | 2025-09-18 00:44:50 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:50.317995 | orchestrator | 2025-09-18 00:44:50 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state STARTED 2025-09-18 00:44:50.318852 | orchestrator | 2025-09-18 00:44:50 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:50.319750 | orchestrator | 2025-09-18 00:44:50 | INFO  | Task 
354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:50.320638 | orchestrator | 2025-09-18 00:44:50 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:50.320694 | orchestrator | 2025-09-18 00:44:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:53.370246 | orchestrator | 2025-09-18 00:44:53 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:53.370921 | orchestrator | 2025-09-18 00:44:53 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:53.371756 | orchestrator | 2025-09-18 00:44:53 | INFO  | Task b2970d0b-5804-42a8-bfc2-819e42fc470f is in state SUCCESS 2025-09-18 00:44:53.373173 | orchestrator | 2025-09-18 00:44:53.373202 | orchestrator | 2025-09-18 00:44:53.373217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:44:53.373231 | orchestrator | 2025-09-18 00:44:53.373245 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:44:53.373259 | orchestrator | Thursday 18 September 2025 00:44:19 +0000 (0:00:00.540) 0:00:00.540 **** 2025-09-18 00:44:53.373271 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:44:53.373373 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:44:53.373389 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:44:53.373400 | orchestrator | 2025-09-18 00:44:53.373412 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:44:53.373424 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.508) 0:00:01.048 **** 2025-09-18 00:44:53.373436 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-18 00:44:53.373448 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-18 00:44:53.373460 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-18 00:44:53.373471 | orchestrator | 2025-09-18 00:44:53.373483 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-18 00:44:53.373495 | orchestrator | 2025-09-18 00:44:53.373506 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-18 00:44:53.373518 | orchestrator | Thursday 18 September 2025 00:44:21 +0000 (0:00:01.023) 0:00:02.072 **** 2025-09-18 00:44:53.373530 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:44:53.373541 | orchestrator | 2025-09-18 00:44:53.373580 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-18 00:44:53.373591 | orchestrator | Thursday 18 September 2025 00:44:22 +0000 (0:00:00.994) 0:00:03.067 **** 2025-09-18 00:44:53.373602 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-18 00:44:53.373613 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-18 00:44:53.373624 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-18 00:44:53.373635 | orchestrator | 2025-09-18 00:44:53.373645 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-18 00:44:53.373656 | orchestrator | Thursday 18 September 2025 00:44:23 +0000 (0:00:01.094) 0:00:04.162 **** 2025-09-18 00:44:53.373667 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-18 00:44:53.373678 | 
orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-18 00:44:53.373689 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-18 00:44:53.373700 | orchestrator | 2025-09-18 00:44:53.373711 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-18 00:44:53.373722 | orchestrator | Thursday 18 September 2025 00:44:26 +0000 (0:00:03.047) 0:00:07.209 **** 2025-09-18 00:44:53.373733 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:53.373743 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:53.373754 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:53.373765 | orchestrator | 2025-09-18 00:44:53.373776 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-18 00:44:53.373787 | orchestrator | Thursday 18 September 2025 00:44:28 +0000 (0:00:01.992) 0:00:09.201 **** 2025-09-18 00:44:53.373820 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:53.373832 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:53.373843 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:53.373854 | orchestrator | 2025-09-18 00:44:53.373865 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:44:53.373876 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:53.373888 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:53.373899 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:53.373910 | orchestrator | 2025-09-18 00:44:53.373921 | orchestrator | 2025-09-18 00:44:53.373932 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:44:53.373942 | orchestrator | Thursday 18 September 2025 00:44:30 +0000 (0:00:02.232) 0:00:11.433 **** 2025-09-18 00:44:53.373953 | orchestrator | =============================================================================== 2025-09-18 00:44:53.373964 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.05s 2025-09-18 00:44:53.373975 | orchestrator | memcached : Restart memcached container --------------------------------- 2.23s 2025-09-18 00:44:53.373985 | orchestrator | memcached : Check memcached container ----------------------------------- 1.99s 2025-09-18 00:44:53.373996 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.09s 2025-09-18 00:44:53.374009 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2025-09-18 00:44:53.374076 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.99s 2025-09-18 00:44:53.374101 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2025-09-18 00:44:53.374114 | orchestrator | 2025-09-18 00:44:53.374126 | orchestrator | 2025-09-18 00:44:53.374139 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:44:53.374151 | orchestrator | 2025-09-18 00:44:53.374163 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:44:53.374176 | orchestrator | Thursday 18 September 2025 00:44:19 +0000 (0:00:00.612) 0:00:00.613 **** 2025-09-18 00:44:53.374189 | 
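The memcached play above follows the usual kolla-ansible role pattern: create /etc/kolla/<service> on each node, template a config.json that tells the container's init how to install config files and which command to run, then a "Check ... container" task whose change notification triggers the "Restart ... container" handler. As a rough illustration only (the concrete command, address and file list below are assumptions, not copied from the testbed), a Python sketch that renders such a file could look like this:

import json
from pathlib import Path

def write_kolla_config_json(service_dir, command, config_files):
    """Render a kolla-style config.json as consumed by the container's kolla init."""
    payload = {"command": command, "config_files": config_files}
    path = Path(service_dir) / "config.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(payload, indent=4) + "\n")

# Hypothetical memcached example; flags, address and file entries are illustrative only.
write_kolla_config_json(
    "/tmp/kolla-demo/memcached",
    "memcached -vv -l 192.0.2.10 -p 11211",
    [{"source": "/var/lib/kolla/config_files/memcached.conf",
      "dest": "/etc/memcached.conf",
      "owner": "memcached",
      "perm": "0600"}],
)

In the real role the file is templated by Ansible onto each node, which is what the "Copying over config.json files for services" task above reports as changed.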
orchestrator | ok: [testbed-node-0] 2025-09-18 00:44:53.374201 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:44:53.374214 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:44:53.374226 | orchestrator | 2025-09-18 00:44:53.374239 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:44:53.374265 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.658) 0:00:01.271 **** 2025-09-18 00:44:53.374278 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-18 00:44:53.374383 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-18 00:44:53.374397 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-18 00:44:53.374409 | orchestrator | 2025-09-18 00:44:53.374420 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-18 00:44:53.374430 | orchestrator | 2025-09-18 00:44:53.374442 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-18 00:44:53.374453 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.632) 0:00:01.904 **** 2025-09-18 00:44:53.374464 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:44:53.374476 | orchestrator | 2025-09-18 00:44:53.374487 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-18 00:44:53.374498 | orchestrator | Thursday 18 September 2025 00:44:21 +0000 (0:00:00.424) 0:00:02.328 **** 2025-09-18 00:44:53.374511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374642 | orchestrator | 2025-09-18 00:44:53.374653 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-18 00:44:53.374664 | orchestrator | Thursday 18 September 2025 00:44:22 +0000 (0:00:01.582) 0:00:03.911 **** 2025-09-18 00:44:53.374676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374762 | orchestrator | 2025-09-18 00:44:53.374786 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-18 00:44:53.374798 | orchestrator | Thursday 18 September 2025 00:44:26 +0000 (0:00:03.265) 0:00:07.176 **** 2025-09-18 00:44:53.374809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374827 | 
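Each item echoed in the redis tasks is the full container definition the role loops over: image, bind mounts, and a healthcheck whose test is "healthcheck_listen redis-server 6379" (26379 for the sentinel), i.e. a check that the expected service is accepting connections on its port. The snippet below is only an approximation of that idea in plain Python, not the actual kolla healthcheck script:

import socket
import sys

def listening(host, port, timeout=3.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Exit 0/1 like a container HEALTHCHECK command would.
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 6379
    sys.exit(0 if listening("127.0.0.1", port) else 1)

The interval, retries, start_period and timeout values from the item dicts map onto the container engine's healthcheck options, so a container only counts as healthy once such a check passes within those bounds.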
orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374889 | orchestrator | 2025-09-18 00:44:53.374905 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-18 00:44:53.374917 | orchestrator | Thursday 18 September 2025 
00:44:28 +0000 (0:00:02.909) 0:00:10.086 **** 2025-09-18 00:44:53.374928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.374996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-18 00:44:53.375008 | orchestrator | 2025-09-18 00:44:53.375021 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-18 00:44:53.375033 | orchestrator | Thursday 18 September 2025 00:44:30 +0000 (0:00:01.812) 0:00:11.898 **** 2025-09-18 00:44:53.375051 | orchestrator | 2025-09-18 00:44:53.375153 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-18 00:44:53.375175 | orchestrator | Thursday 18 September 2025 00:44:30 +0000 (0:00:00.100) 0:00:11.999 **** 2025-09-18 00:44:53.375187 | orchestrator | 2025-09-18 00:44:53.375200 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-18 00:44:53.375212 | orchestrator | Thursday 18 September 2025 00:44:31 +0000 (0:00:00.105) 0:00:12.104 **** 2025-09-18 00:44:53.375224 | orchestrator | 2025-09-18 00:44:53.375236 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-18 00:44:53.375249 | orchestrator | Thursday 18 September 2025 00:44:31 +0000 (0:00:00.195) 0:00:12.299 **** 2025-09-18 00:44:53.375262 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:53.375275 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:53.375287 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:53.375299 | orchestrator | 2025-09-18 00:44:53.375311 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-18 00:44:53.375323 | orchestrator | Thursday 18 September 2025 00:44:39 +0000 (0:00:08.299) 0:00:20.599 **** 2025-09-18 00:44:53.375335 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:44:53.375347 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:44:53.375359 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:44:53.375371 | orchestrator | 2025-09-18 00:44:53.375382 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:44:53.375393 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:53.375404 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:53.375415 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:44:53.375426 | orchestrator | 2025-09-18 00:44:53.375437 | orchestrator | 2025-09-18 00:44:53.375448 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:44:53.375459 | orchestrator | Thursday 18 September 2025 00:44:50 +0000 (0:00:10.789) 0:00:31.388 **** 2025-09-18 00:44:53.375470 | orchestrator | =============================================================================== 2025-09-18 00:44:53.375481 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.79s 2025-09-18 00:44:53.375492 | orchestrator | redis : Restart redis container ----------------------------------------- 8.30s 2025-09-18 
00:44:53.375503 | orchestrator | redis : Copying over default config.json files -------------------------- 3.27s 2025-09-18 00:44:53.375514 | orchestrator | redis : Copying over redis config files --------------------------------- 2.91s 2025-09-18 00:44:53.375525 | orchestrator | redis : Check redis containers ------------------------------------------ 1.81s 2025-09-18 00:44:53.375535 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.58s 2025-09-18 00:44:53.375567 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 2025-09-18 00:44:53.375579 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-09-18 00:44:53.375590 | orchestrator | redis : include_tasks --------------------------------------------------- 0.42s 2025-09-18 00:44:53.375600 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.40s 2025-09-18 00:44:53.375611 | orchestrator | 2025-09-18 00:44:53 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:53.375623 | orchestrator | 2025-09-18 00:44:53 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:53.375634 | orchestrator | 2025-09-18 00:44:53 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:53.376152 | orchestrator | 2025-09-18 00:44:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:56.526837 | orchestrator | 2025-09-18 00:44:56 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:56.526920 | orchestrator | 2025-09-18 00:44:56 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:56.526934 | orchestrator | 2025-09-18 00:44:56 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:56.526946 | orchestrator | 2025-09-18 00:44:56 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:56.526956 | orchestrator | 2025-09-18 00:44:56 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:56.526967 | orchestrator | 2025-09-18 00:44:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:44:59.502820 | orchestrator | 2025-09-18 00:44:59 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:44:59.503230 | orchestrator | 2025-09-18 00:44:59 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:44:59.505194 | orchestrator | 2025-09-18 00:44:59 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:44:59.505273 | orchestrator | 2025-09-18 00:44:59 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:44:59.505288 | orchestrator | 2025-09-18 00:44:59 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:44:59.505301 | orchestrator | 2025-09-18 00:44:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:02.661862 | orchestrator | 2025-09-18 00:45:02 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:02.661944 | orchestrator | 2025-09-18 00:45:02 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:02.661958 | orchestrator | 2025-09-18 00:45:02 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:45:02.661969 | orchestrator | 2025-09-18 
00:45:02 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:02.661980 | orchestrator | 2025-09-18 00:45:02 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:02.661991 | orchestrator | 2025-09-18 00:45:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:05.726848 | orchestrator | 2025-09-18 00:45:05 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:05.726942 | orchestrator | 2025-09-18 00:45:05 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:05.728394 | orchestrator | 2025-09-18 00:45:05 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:45:05.728430 | orchestrator | 2025-09-18 00:45:05 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:05.728449 | orchestrator | 2025-09-18 00:45:05 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:05.728461 | orchestrator | 2025-09-18 00:45:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:09.205011 | orchestrator | 2025-09-18 00:45:09 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:09.205427 | orchestrator | 2025-09-18 00:45:09 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:09.206004 | orchestrator | 2025-09-18 00:45:09 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state STARTED 2025-09-18 00:45:09.206677 | orchestrator | 2025-09-18 00:45:09 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:09.207286 | orchestrator | 2025-09-18 00:45:09 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:09.207397 | orchestrator | 2025-09-18 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:12.276183 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:12.276269 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:12.276284 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task 99ef0a5a-d326-4f1b-a3ff-a5e2295532ef is in state STARTED 2025-09-18 00:45:12.276296 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task 62b1277f-2c45-41a3-b6d6-66a18b570f73 is in state SUCCESS 2025-09-18 00:45:12.276307 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:12.276318 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task 200fb13b-45f4-4aa7-8430-44a3b75c24c0 is in state STARTED 2025-09-18 00:45:12.276329 | orchestrator | 2025-09-18 00:45:12 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:12.276340 | orchestrator | 2025-09-18 00:45:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:12.277220 | orchestrator | 2025-09-18 00:45:12.277375 | orchestrator | 2025-09-18 00:45:12.277389 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-18 00:45:12.277401 | orchestrator | 2025-09-18 00:45:12.277412 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-18 00:45:12.277423 | orchestrator | Thursday 18 September 2025 00:41:49 +0000 (0:00:00.206) 0:00:00.206 **** 2025-09-18 00:45:12.277439 | orchestrator | ok: [testbed-node-3] 
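The interleaved "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines come from the OSISM client polling the manager for the background tasks it has enqueued (roughly one per playbook) until each reaches a terminal state such as SUCCESS. The loop below is a generic illustration of that pattern only; get_task_state is a stand-in for whatever lookup the real client performs and is not an actual osism function:

import random
import time

TERMINAL = {"SUCCESS", "FAILURE", "REVOKED"}

def get_task_state(task_id):
    # Stand-in for the real status lookup against the manager / task result backend;
    # here it simply flips to SUCCESS eventually so the demo loop terminates.
    return "SUCCESS" if random.random() < 0.2 else "STARTED"

def wait_for_tasks(task_ids, delay=1.0):
    states = {task_id: "STARTED" for task_id in task_ids}
    while True:
        for task_id in task_ids:
            if states[task_id] not in TERMINAL:
                states[task_id] = get_task_state(task_id)
                print(f"Task {task_id} is in state {states[task_id]}")
        if all(state in TERMINAL for state in states.values()):
            return states
        print(f"Wait {int(delay)} second(s) until the next check")
        time.sleep(delay)

if __name__ == "__main__":
    wait_for_tasks(["f9efe551", "e2df5647", "b2970d0b"])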
2025-09-18 00:45:12.277452 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.277462 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.277473 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.277484 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.277495 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.277506 | orchestrator | 2025-09-18 00:45:12.277531 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-18 00:45:12.277628 | orchestrator | Thursday 18 September 2025 00:41:50 +0000 (0:00:00.688) 0:00:00.895 **** 2025-09-18 00:45:12.277639 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.277650 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.277661 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.277672 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.277682 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.277693 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.277704 | orchestrator | 2025-09-18 00:45:12.277715 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-18 00:45:12.277726 | orchestrator | Thursday 18 September 2025 00:41:51 +0000 (0:00:00.610) 0:00:01.506 **** 2025-09-18 00:45:12.277736 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.277747 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.277758 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.277768 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.277779 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.277789 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.277800 | orchestrator | 2025-09-18 00:45:12.277811 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-18 00:45:12.277821 | orchestrator | Thursday 18 September 2025 00:41:51 +0000 (0:00:00.728) 0:00:02.234 **** 2025-09-18 00:45:12.277832 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.277843 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.277853 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.277864 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.277897 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.277908 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.277919 | orchestrator | 2025-09-18 00:45:12.277930 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-18 00:45:12.277941 | orchestrator | Thursday 18 September 2025 00:41:53 +0000 (0:00:01.697) 0:00:03.931 **** 2025-09-18 00:45:12.277952 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.277962 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.277973 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.277983 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.277995 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.278007 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.278060 | orchestrator | 2025-09-18 00:45:12.278081 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-18 00:45:12.278100 | orchestrator | Thursday 18 September 2025 00:41:55 +0000 (0:00:01.529) 0:00:05.461 **** 2025-09-18 00:45:12.278118 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.278136 | 
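The k3s_prereq tasks above enable IPv4/IPv6 forwarding and IPv6 router advertisements on every node, which boils down to setting a handful of sysctl keys. The role uses Ansible's sysctl module for this; the sketch below only shows the runtime equivalent through /proc/sys (requires root), and the exact keys and values are assumptions inferred from the task names:

from pathlib import Path

SYSCTLS = {
    "net.ipv4.ip_forward": "1",
    "net.ipv6.conf.all.forwarding": "1",
    "net.ipv6.conf.all.accept_ra": "2",
}

def apply_sysctl(key, value):
    """Write the value through /proc/sys (runtime only, not persisted across reboots)."""
    Path("/proc/sys", *key.split(".")).write_text(value + "\n")

if __name__ == "__main__":
    for key, value in SYSCTLS.items():
        apply_sysctl(key, value)
        print(f"{key} = {value}")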
orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.278156 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.278174 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.278194 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.278208 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.278220 | orchestrator | 2025-09-18 00:45:12.278233 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-18 00:45:12.278245 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:01.246) 0:00:06.707 **** 2025-09-18 00:45:12.278257 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.278269 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.278281 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.278294 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.278306 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.278318 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.278330 | orchestrator | 2025-09-18 00:45:12.278342 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-18 00:45:12.278355 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:00.446) 0:00:07.153 **** 2025-09-18 00:45:12.278365 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.278376 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.278387 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.278397 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.278408 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.278418 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.278428 | orchestrator | 2025-09-18 00:45:12.278439 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-18 00:45:12.278450 | orchestrator | Thursday 18 September 2025 00:41:57 +0000 (0:00:00.707) 0:00:07.861 **** 2025-09-18 00:45:12.278460 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 00:45:12.278471 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 00:45:12.278482 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.278492 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 00:45:12.278503 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 00:45:12.278514 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.278524 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 00:45:12.278598 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 00:45:12.278612 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.278623 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 00:45:12.278646 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 00:45:12.278658 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.278680 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 00:45:12.278691 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 00:45:12.278702 | orchestrator | skipping: 
[testbed-node-1] 2025-09-18 00:45:12.278713 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 00:45:12.278724 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 00:45:12.278742 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.278754 | orchestrator | 2025-09-18 00:45:12.278764 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-18 00:45:12.278775 | orchestrator | Thursday 18 September 2025 00:41:58 +0000 (0:00:00.655) 0:00:08.517 **** 2025-09-18 00:45:12.278786 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.278796 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.278807 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.278818 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.278829 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.278839 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.278850 | orchestrator | 2025-09-18 00:45:12.278861 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-18 00:45:12.278872 | orchestrator | Thursday 18 September 2025 00:41:59 +0000 (0:00:01.191) 0:00:09.708 **** 2025-09-18 00:45:12.278883 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.278894 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.278903 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.278913 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.278923 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.278932 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.278942 | orchestrator | 2025-09-18 00:45:12.278951 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-18 00:45:12.278961 | orchestrator | Thursday 18 September 2025 00:42:00 +0000 (0:00:00.772) 0:00:10.480 **** 2025-09-18 00:45:12.278970 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.278980 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.278990 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.278999 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.279009 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.279018 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.279028 | orchestrator | 2025-09-18 00:45:12.279038 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-18 00:45:12.279047 | orchestrator | Thursday 18 September 2025 00:42:06 +0000 (0:00:06.073) 0:00:16.554 **** 2025-09-18 00:45:12.279057 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.279067 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.279076 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.279086 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.279095 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.279105 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.279114 | orchestrator | 2025-09-18 00:45:12.279124 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-18 00:45:12.279134 | orchestrator | Thursday 18 September 2025 00:42:07 +0000 (0:00:01.015) 0:00:17.569 **** 2025-09-18 00:45:12.279143 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.279153 | 
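The k3s_download role fetches the single k3s binary for the node's architecture; here only the x86_64 variant runs and the arm64/armhf variants are skipped. Functionally that is an HTTP download plus an executable bit, roughly as below; the GitHub release URL pattern is the public one, and the version string as well as the role's checksum handling are not reproduced here:

import os
import stat
import urllib.request

def download_k3s(version, dest="./k3s"):
    """Fetch the k3s binary for x86_64 and mark it executable."""
    url = f"https://github.com/k3s-io/k3s/releases/download/{version}/k3s"
    urllib.request.urlretrieve(url, dest)
    os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

if __name__ == "__main__":
    download_k3s("v1.30.4+k3s1")  # version is illustrative only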
orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.279162 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.279172 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.279187 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.279203 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.279219 | orchestrator | 2025-09-18 00:45:12.279236 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-18 00:45:12.279254 | orchestrator | Thursday 18 September 2025 00:42:09 +0000 (0:00:02.458) 0:00:20.028 **** 2025-09-18 00:45:12.279282 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.279297 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.279314 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.279335 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.279358 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.279374 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.279389 | orchestrator | 2025-09-18 00:45:12.279405 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-18 00:45:12.279419 | orchestrator | Thursday 18 September 2025 00:42:10 +0000 (0:00:01.133) 0:00:21.161 **** 2025-09-18 00:45:12.279434 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-18 00:45:12.279449 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-18 00:45:12.279464 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-18 00:45:12.279479 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-18 00:45:12.279495 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-18 00:45:12.279510 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-18 00:45:12.279526 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-18 00:45:12.279564 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-18 00:45:12.279580 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-18 00:45:12.279594 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-18 00:45:12.279610 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-18 00:45:12.279626 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-18 00:45:12.279642 | orchestrator | 2025-09-18 00:45:12.279657 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-18 00:45:12.279674 | orchestrator | Thursday 18 September 2025 00:42:12 +0000 (0:00:02.103) 0:00:23.265 **** 2025-09-18 00:45:12.279690 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.279707 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.279723 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.279740 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.279753 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.279763 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.279773 | orchestrator | 2025-09-18 00:45:12.279795 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-18 00:45:12.279806 | orchestrator | 2025-09-18 00:45:12.279815 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-18 00:45:12.279825 | orchestrator | Thursday 18 September 2025 
00:42:14 +0000 (0:00:01.820) 0:00:25.086 **** 2025-09-18 00:45:12.279835 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.279844 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.279853 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.279863 | orchestrator | 2025-09-18 00:45:12.279872 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-18 00:45:12.279889 | orchestrator | Thursday 18 September 2025 00:42:15 +0000 (0:00:01.093) 0:00:26.179 **** 2025-09-18 00:45:12.279899 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.279909 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.279918 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.279927 | orchestrator | 2025-09-18 00:45:12.279937 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-18 00:45:12.279946 | orchestrator | Thursday 18 September 2025 00:42:17 +0000 (0:00:01.252) 0:00:27.431 **** 2025-09-18 00:45:12.279956 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.279965 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.279974 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.279984 | orchestrator | 2025-09-18 00:45:12.279993 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-18 00:45:12.280003 | orchestrator | Thursday 18 September 2025 00:42:18 +0000 (0:00:00.865) 0:00:28.297 **** 2025-09-18 00:45:12.280021 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.280030 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.280040 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.280049 | orchestrator | 2025-09-18 00:45:12.280059 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-18 00:45:12.280068 | orchestrator | Thursday 18 September 2025 00:42:19 +0000 (0:00:01.121) 0:00:29.418 **** 2025-09-18 00:45:12.280078 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.280087 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280097 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280106 | orchestrator | 2025-09-18 00:45:12.280116 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-18 00:45:12.280126 | orchestrator | Thursday 18 September 2025 00:42:19 +0000 (0:00:00.382) 0:00:29.801 **** 2025-09-18 00:45:12.280135 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.280145 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.280154 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.280164 | orchestrator | 2025-09-18 00:45:12.280173 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-18 00:45:12.280183 | orchestrator | Thursday 18 September 2025 00:42:20 +0000 (0:00:00.629) 0:00:30.431 **** 2025-09-18 00:45:12.280192 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.280202 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.280211 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.280221 | orchestrator | 2025-09-18 00:45:12.280230 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-18 00:45:12.280240 | orchestrator | Thursday 18 September 2025 00:42:21 +0000 (0:00:01.408) 0:00:31.839 **** 2025-09-18 00:45:12.280249 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:45:12.280259 | orchestrator | 2025-09-18 00:45:12.280269 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-18 00:45:12.280278 | orchestrator | Thursday 18 September 2025 00:42:22 +0000 (0:00:00.740) 0:00:32.580 **** 2025-09-18 00:45:12.280288 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.280297 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.280307 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.280316 | orchestrator | 2025-09-18 00:45:12.280325 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-18 00:45:12.280335 | orchestrator | Thursday 18 September 2025 00:42:24 +0000 (0:00:01.877) 0:00:34.457 **** 2025-09-18 00:45:12.280345 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280354 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280364 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.280373 | orchestrator | 2025-09-18 00:45:12.280382 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-18 00:45:12.280392 | orchestrator | Thursday 18 September 2025 00:42:24 +0000 (0:00:00.744) 0:00:35.202 **** 2025-09-18 00:45:12.280401 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280411 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.280420 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280430 | orchestrator | 2025-09-18 00:45:12.280439 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-18 00:45:12.280449 | orchestrator | Thursday 18 September 2025 00:42:25 +0000 (0:00:00.945) 0:00:36.147 **** 2025-09-18 00:45:12.280458 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280468 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280477 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.280487 | orchestrator | 2025-09-18 00:45:12.280496 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-18 00:45:12.280506 | orchestrator | Thursday 18 September 2025 00:42:27 +0000 (0:00:01.431) 0:00:37.578 **** 2025-09-18 00:45:12.280515 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.280525 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280552 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280568 | orchestrator | 2025-09-18 00:45:12.280578 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-18 00:45:12.280588 | orchestrator | Thursday 18 September 2025 00:42:27 +0000 (0:00:00.433) 0:00:38.012 **** 2025-09-18 00:45:12.280597 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.280607 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280616 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280626 | orchestrator | 2025-09-18 00:45:12.280635 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-18 00:45:12.280645 | orchestrator | Thursday 18 September 2025 00:42:28 +0000 (0:00:00.348) 0:00:38.361 **** 2025-09-18 00:45:12.280654 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.280664 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.280673 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.280683 | 
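"Init cluster inside the transient k3s-init service" starts the k3s servers once under a throwaway systemd unit so the embedded etcd cluster can form before the permanent k3s.service is installed: the first server bootstraps with --cluster-init and the others join it over the shared token. The sketch below shows the general shape via systemd-run; the unit name and the exact flag set used by the role are assumptions based on the task names in this play:

import subprocess

def start_k3s_init(server_ip, token, first_server):
    """Launch k3s under a transient 'k3s-init' unit (requires systemd and root)."""
    cmd = ["systemd-run", "--unit=k3s-init", "k3s", "server",
           f"--token={token}", f"--tls-san={server_ip}"]
    if first_server:
        cmd.append("--cluster-init")                       # bootstrap embedded etcd
    else:
        cmd.append(f"--server=https://{server_ip}:6443")   # join the first server
    subprocess.run(cmd, check=True)

def stop_k3s_init():
    """Matches the later 'Kill the temporary service used for initialization' task."""
    subprocess.run(["systemctl", "stop", "k3s-init"], check=True)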
orchestrator | 2025-09-18 00:45:12.280699 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-18 00:45:12.280709 | orchestrator | Thursday 18 September 2025 00:42:30 +0000 (0:00:02.113) 0:00:40.474 **** 2025-09-18 00:45:12.280719 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-18 00:45:12.280734 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-18 00:45:12.280744 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-18 00:45:12.280754 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-18 00:45:12.280763 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-18 00:45:12.280773 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-18 00:45:12.280783 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-18 00:45:12.280792 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-18 00:45:12.280801 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-18 00:45:12.280811 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-18 00:45:12.280820 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-18 00:45:12.280830 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
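The FAILED - RETRYING lines are expected behaviour of the verification task: it repeatedly queries the API server for the node list (up to 20 attempts) and only passes once all three control-plane nodes have registered, which here takes about 45 seconds. A comparable retry loop in Python, shelling out to kubectl (retry count and delay mirror typical until/retries settings and are assumptions, not the role's exact values):

import subprocess
import time

def wait_for_nodes(expected, retries=20, delay=10.0):
    """Poll `kubectl get nodes` until the expected number of nodes is registered."""
    for attempt in range(1, retries + 1):
        result = subprocess.run(
            ["kubectl", "get", "nodes", "--no-headers"],
            capture_output=True, text=True,
        )
        nodes = [line for line in result.stdout.splitlines() if line.strip()]
        if result.returncode == 0 and len(nodes) >= expected:
            return True
        print(f"FAILED - RETRYING: verify nodes joined ({retries - attempt} retries left).")
        time.sleep(delay)
    return False

if __name__ == "__main__":
    ok = wait_for_nodes(expected=3)
    print("all nodes joined" if ok else "nodes did not join in time")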
2025-09-18 00:45:12.280839 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.280849 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.280859 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.280868 | orchestrator | 2025-09-18 00:45:12.280878 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-18 00:45:12.280887 | orchestrator | Thursday 18 September 2025 00:43:15 +0000 (0:00:45.377) 0:01:25.852 **** 2025-09-18 00:45:12.280897 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.280907 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.280916 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.280925 | orchestrator | 2025-09-18 00:45:12.280935 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-18 00:45:12.280944 | orchestrator | Thursday 18 September 2025 00:43:15 +0000 (0:00:00.290) 0:01:26.143 **** 2025-09-18 00:45:12.280959 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.280969 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.280979 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.280988 | orchestrator | 2025-09-18 00:45:12.280998 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-18 00:45:12.281007 | orchestrator | Thursday 18 September 2025 00:43:16 +0000 (0:00:01.086) 0:01:27.229 **** 2025-09-18 00:45:12.281017 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281026 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.281036 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281045 | orchestrator | 2025-09-18 00:45:12.281055 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-18 00:45:12.281064 | orchestrator | Thursday 18 September 2025 00:43:18 +0000 (0:00:01.421) 0:01:28.651 **** 2025-09-18 00:45:12.281074 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.281083 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281093 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281102 | orchestrator | 2025-09-18 00:45:12.281111 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-18 00:45:12.281121 | orchestrator | Thursday 18 September 2025 00:43:42 +0000 (0:00:24.414) 0:01:53.066 **** 2025-09-18 00:45:12.281130 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.281140 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.281149 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.281158 | orchestrator | 2025-09-18 00:45:12.281168 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-18 00:45:12.281177 | orchestrator | Thursday 18 September 2025 00:43:43 +0000 (0:00:00.726) 0:01:53.793 **** 2025-09-18 00:45:12.281187 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.281196 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.281206 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.281215 | orchestrator | 2025-09-18 00:45:12.281224 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-18 00:45:12.281234 | orchestrator | Thursday 18 September 2025 00:43:44 +0000 (0:00:00.766) 0:01:54.559 **** 2025-09-18 00:45:12.281243 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281253 | orchestrator | changed: 
[testbed-node-1] 2025-09-18 00:45:12.281262 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281272 | orchestrator | 2025-09-18 00:45:12.281281 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-18 00:45:12.281291 | orchestrator | Thursday 18 September 2025 00:43:44 +0000 (0:00:00.650) 0:01:55.210 **** 2025-09-18 00:45:12.281301 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.281315 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.281325 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.281334 | orchestrator | 2025-09-18 00:45:12.281344 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-18 00:45:12.281354 | orchestrator | Thursday 18 September 2025 00:43:45 +0000 (0:00:00.946) 0:01:56.157 **** 2025-09-18 00:45:12.281363 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.281373 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.281382 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.281391 | orchestrator | 2025-09-18 00:45:12.281401 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-18 00:45:12.281414 | orchestrator | Thursday 18 September 2025 00:43:46 +0000 (0:00:00.328) 0:01:56.485 **** 2025-09-18 00:45:12.281424 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281434 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.281443 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281453 | orchestrator | 2025-09-18 00:45:12.281462 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-18 00:45:12.281472 | orchestrator | Thursday 18 September 2025 00:43:46 +0000 (0:00:00.572) 0:01:57.057 **** 2025-09-18 00:45:12.281481 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281496 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.281505 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281515 | orchestrator | 2025-09-18 00:45:12.281524 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-18 00:45:12.281546 | orchestrator | Thursday 18 September 2025 00:43:47 +0000 (0:00:00.551) 0:01:57.608 **** 2025-09-18 00:45:12.281556 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281566 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.281575 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281585 | orchestrator | 2025-09-18 00:45:12.281594 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-18 00:45:12.281604 | orchestrator | Thursday 18 September 2025 00:43:48 +0000 (0:00:01.046) 0:01:58.654 **** 2025-09-18 00:45:12.281613 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:12.281623 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:12.281632 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:12.281641 | orchestrator | 2025-09-18 00:45:12.281651 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-18 00:45:12.281660 | orchestrator | Thursday 18 September 2025 00:43:49 +0000 (0:00:00.723) 0:01:59.377 **** 2025-09-18 00:45:12.281670 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.281679 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.281689 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
00:45:12.281698 | orchestrator | 2025-09-18 00:45:12.281707 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-18 00:45:12.281717 | orchestrator | Thursday 18 September 2025 00:43:49 +0000 (0:00:00.290) 0:01:59.668 **** 2025-09-18 00:45:12.281726 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.281736 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.281745 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.281755 | orchestrator | 2025-09-18 00:45:12.281764 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-18 00:45:12.281774 | orchestrator | Thursday 18 September 2025 00:43:49 +0000 (0:00:00.282) 0:01:59.950 **** 2025-09-18 00:45:12.281783 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.281793 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.281802 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.281812 | orchestrator | 2025-09-18 00:45:12.281821 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-18 00:45:12.281831 | orchestrator | Thursday 18 September 2025 00:43:50 +0000 (0:00:00.801) 0:02:00.752 **** 2025-09-18 00:45:12.281840 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.281850 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.281859 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.281868 | orchestrator | 2025-09-18 00:45:12.281878 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-18 00:45:12.281888 | orchestrator | Thursday 18 September 2025 00:43:51 +0000 (0:00:00.588) 0:02:01.340 **** 2025-09-18 00:45:12.281897 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-18 00:45:12.281906 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-18 00:45:12.281916 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-18 00:45:12.281926 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-18 00:45:12.281935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-18 00:45:12.281945 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-18 00:45:12.281954 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-18 00:45:12.281963 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-18 00:45:12.281978 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-18 00:45:12.281988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-18 00:45:12.281997 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-18 00:45:12.282007 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-18 00:45:12.282043 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-18 00:45:12.282061 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-18 00:45:12.282071 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-18 00:45:12.282080 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-18 00:45:12.282090 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-18 00:45:12.282100 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-18 00:45:12.282113 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-18 00:45:12.282123 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-18 00:45:12.282133 | orchestrator | 2025-09-18 00:45:12.282142 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-18 00:45:12.282152 | orchestrator | 2025-09-18 00:45:12.282162 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-18 00:45:12.282171 | orchestrator | Thursday 18 September 2025 00:43:53 +0000 (0:00:02.887) 0:02:04.228 **** 2025-09-18 00:45:12.282181 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.282190 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.282200 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.282209 | orchestrator | 2025-09-18 00:45:12.282219 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-18 00:45:12.282228 | orchestrator | Thursday 18 September 2025 00:43:54 +0000 (0:00:00.505) 0:02:04.733 **** 2025-09-18 00:45:12.282238 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.282247 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.282257 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.282266 | orchestrator | 2025-09-18 00:45:12.282275 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-18 00:45:12.282285 | orchestrator | Thursday 18 September 2025 00:43:55 +0000 (0:00:00.594) 0:02:05.328 **** 2025-09-18 00:45:12.282294 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.282304 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.282313 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.282323 | orchestrator | 2025-09-18 00:45:12.282332 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-18 00:45:12.282342 | orchestrator | Thursday 18 September 2025 00:43:55 +0000 (0:00:00.330) 0:02:05.658 **** 2025-09-18 00:45:12.282351 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:45:12.282361 | orchestrator | 2025-09-18 00:45:12.282371 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-18 00:45:12.282380 | orchestrator | Thursday 18 September 2025 00:43:56 +0000 (0:00:00.707) 0:02:06.366 **** 2025-09-18 00:45:12.282390 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.282399 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.282409 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.282418 | orchestrator | 2025-09-18 00:45:12.282428 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-09-18 00:45:12.282437 | orchestrator | Thursday 18 September 2025 00:43:56 +0000 (0:00:00.338) 0:02:06.704 **** 2025-09-18 00:45:12.282457 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.282467 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.282476 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.282486 | orchestrator | 2025-09-18 00:45:12.282495 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-18 00:45:12.282505 | orchestrator | Thursday 18 September 2025 00:43:56 +0000 (0:00:00.309) 0:02:07.014 **** 2025-09-18 00:45:12.282514 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.282523 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.282554 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.282564 | orchestrator | 2025-09-18 00:45:12.282574 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-18 00:45:12.282583 | orchestrator | Thursday 18 September 2025 00:43:57 +0000 (0:00:00.355) 0:02:07.370 **** 2025-09-18 00:45:12.282593 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.282602 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.282612 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.282621 | orchestrator | 2025-09-18 00:45:12.282630 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-18 00:45:12.282640 | orchestrator | Thursday 18 September 2025 00:43:57 +0000 (0:00:00.844) 0:02:08.214 **** 2025-09-18 00:45:12.282650 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.282659 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.282669 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.282678 | orchestrator | 2025-09-18 00:45:12.282688 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-18 00:45:12.282697 | orchestrator | Thursday 18 September 2025 00:43:58 +0000 (0:00:01.058) 0:02:09.273 **** 2025-09-18 00:45:12.282707 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.282716 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.282726 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.282735 | orchestrator | 2025-09-18 00:45:12.282745 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-18 00:45:12.282754 | orchestrator | Thursday 18 September 2025 00:44:00 +0000 (0:00:01.222) 0:02:10.495 **** 2025-09-18 00:45:12.282764 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:12.282773 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:12.282783 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:12.282792 | orchestrator | 2025-09-18 00:45:12.282802 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-18 00:45:12.282811 | orchestrator | 2025-09-18 00:45:12.282821 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-18 00:45:12.282830 | orchestrator | Thursday 18 September 2025 00:44:12 +0000 (0:00:12.100) 0:02:22.596 **** 2025-09-18 00:45:12.282840 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.282849 | orchestrator | 2025-09-18 00:45:12.282859 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-18 
00:45:12.282868 | orchestrator | Thursday 18 September 2025 00:44:13 +0000 (0:00:00.770) 0:02:23.367 **** 2025-09-18 00:45:12.282883 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.282893 | orchestrator | 2025-09-18 00:45:12.282902 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-18 00:45:12.282912 | orchestrator | Thursday 18 September 2025 00:44:13 +0000 (0:00:00.750) 0:02:24.117 **** 2025-09-18 00:45:12.282921 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-18 00:45:12.282931 | orchestrator | 2025-09-18 00:45:12.282940 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-18 00:45:12.282950 | orchestrator | Thursday 18 September 2025 00:44:14 +0000 (0:00:00.633) 0:02:24.750 **** 2025-09-18 00:45:12.282964 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.282974 | orchestrator | 2025-09-18 00:45:12.282983 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-18 00:45:12.282993 | orchestrator | Thursday 18 September 2025 00:44:15 +0000 (0:00:00.851) 0:02:25.601 **** 2025-09-18 00:45:12.283009 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.283019 | orchestrator | 2025-09-18 00:45:12.283028 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-18 00:45:12.283038 | orchestrator | Thursday 18 September 2025 00:44:15 +0000 (0:00:00.458) 0:02:26.059 **** 2025-09-18 00:45:12.283048 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 00:45:12.283057 | orchestrator | 2025-09-18 00:45:12.283067 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-18 00:45:12.283076 | orchestrator | Thursday 18 September 2025 00:44:17 +0000 (0:00:01.374) 0:02:27.434 **** 2025-09-18 00:45:12.283086 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 00:45:12.283095 | orchestrator | 2025-09-18 00:45:12.283105 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-18 00:45:12.283114 | orchestrator | Thursday 18 September 2025 00:44:17 +0000 (0:00:00.674) 0:02:28.108 **** 2025-09-18 00:45:12.283124 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.283133 | orchestrator | 2025-09-18 00:45:12.283143 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-18 00:45:12.283152 | orchestrator | Thursday 18 September 2025 00:44:18 +0000 (0:00:00.353) 0:02:28.461 **** 2025-09-18 00:45:12.283162 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.283171 | orchestrator | 2025-09-18 00:45:12.283181 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-18 00:45:12.283190 | orchestrator | 2025-09-18 00:45:12.283200 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-18 00:45:12.283209 | orchestrator | Thursday 18 September 2025 00:44:18 +0000 (0:00:00.571) 0:02:29.033 **** 2025-09-18 00:45:12.283219 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.283228 | orchestrator | 2025-09-18 00:45:12.283238 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-18 00:45:12.283247 | orchestrator | Thursday 18 September 2025 00:44:18 +0000 (0:00:00.124) 0:02:29.158 **** 2025-09-18 
00:45:12.283257 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 00:45:12.283266 | orchestrator | 2025-09-18 00:45:12.283276 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-18 00:45:12.283285 | orchestrator | Thursday 18 September 2025 00:44:19 +0000 (0:00:00.190) 0:02:29.348 **** 2025-09-18 00:45:12.283295 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.283304 | orchestrator | 2025-09-18 00:45:12.283314 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-18 00:45:12.283323 | orchestrator | Thursday 18 September 2025 00:44:19 +0000 (0:00:00.609) 0:02:29.958 **** 2025-09-18 00:45:12.283333 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.283342 | orchestrator | 2025-09-18 00:45:12.283352 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-18 00:45:12.283361 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:01.138) 0:02:31.097 **** 2025-09-18 00:45:12.283371 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.283380 | orchestrator | 2025-09-18 00:45:12.283390 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-18 00:45:12.283399 | orchestrator | Thursday 18 September 2025 00:44:21 +0000 (0:00:00.862) 0:02:31.960 **** 2025-09-18 00:45:12.283408 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.283418 | orchestrator | 2025-09-18 00:45:12.283427 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-18 00:45:12.283437 | orchestrator | Thursday 18 September 2025 00:44:22 +0000 (0:00:00.379) 0:02:32.339 **** 2025-09-18 00:45:12.283446 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.283456 | orchestrator | 2025-09-18 00:45:12.283465 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-18 00:45:12.283475 | orchestrator | Thursday 18 September 2025 00:44:28 +0000 (0:00:05.950) 0:02:38.290 **** 2025-09-18 00:45:12.283484 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.283499 | orchestrator | 2025-09-18 00:45:12.283509 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-18 00:45:12.283518 | orchestrator | Thursday 18 September 2025 00:44:40 +0000 (0:00:12.871) 0:02:51.162 **** 2025-09-18 00:45:12.283528 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.283550 | orchestrator | 2025-09-18 00:45:12.283560 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-18 00:45:12.283570 | orchestrator | 2025-09-18 00:45:12.283579 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-18 00:45:12.283589 | orchestrator | Thursday 18 September 2025 00:44:41 +0000 (0:00:00.557) 0:02:51.719 **** 2025-09-18 00:45:12.283598 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.283608 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.283617 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.283627 | orchestrator | 2025-09-18 00:45:12.283636 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-18 00:45:12.283645 | orchestrator | Thursday 18 September 2025 00:44:42 +0000 (0:00:00.579) 0:02:52.299 **** 
2025-09-18 00:45:12.283655 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283664 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.283674 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.283683 | orchestrator | 2025-09-18 00:45:12.283698 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-18 00:45:12.283708 | orchestrator | Thursday 18 September 2025 00:44:42 +0000 (0:00:00.328) 0:02:52.627 **** 2025-09-18 00:45:12.283718 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:45:12.283727 | orchestrator | 2025-09-18 00:45:12.283737 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-18 00:45:12.283746 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.796) 0:02:53.424 **** 2025-09-18 00:45:12.283756 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283765 | orchestrator | 2025-09-18 00:45:12.283779 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-18 00:45:12.283788 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.227) 0:02:53.651 **** 2025-09-18 00:45:12.283798 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283808 | orchestrator | 2025-09-18 00:45:12.283817 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-18 00:45:12.283827 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.216) 0:02:53.867 **** 2025-09-18 00:45:12.283836 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283846 | orchestrator | 2025-09-18 00:45:12.283855 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-18 00:45:12.283865 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.198) 0:02:54.066 **** 2025-09-18 00:45:12.283874 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283884 | orchestrator | 2025-09-18 00:45:12.283893 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-18 00:45:12.283903 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.178) 0:02:54.244 **** 2025-09-18 00:45:12.283912 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283922 | orchestrator | 2025-09-18 00:45:12.283931 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-18 00:45:12.283941 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.180) 0:02:54.424 **** 2025-09-18 00:45:12.283950 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283960 | orchestrator | 2025-09-18 00:45:12.283969 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-18 00:45:12.283979 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.211) 0:02:54.636 **** 2025-09-18 00:45:12.283988 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.283998 | orchestrator | 2025-09-18 00:45:12.284007 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-18 00:45:12.284023 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.215) 0:02:54.851 **** 2025-09-18 00:45:12.284033 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284042 | orchestrator | 
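The Cilium CLI handling above and below is skipped in this run, but the task names outline the usual flow: read the latest stable version file, download the tarball plus its sha256sum, verify it, and extract the binary to /usr/local/bin. A minimal sketch of that flow, assuming the upstream cilium-cli release layout rather than whatever URLs the role actually templates:

  # sketch only: upstream cilium-cli install steps mirroring the task names above/below
  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
  ARCH=amd64
  curl -LO "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${ARCH}.tar.gz"
  curl -LO "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${ARCH}.tar.gz.sha256sum"
  sha256sum --check "cilium-linux-${ARCH}.tar.gz.sha256sum"          # "Verify the downloaded tarball"
  sudo tar -xzf "cilium-linux-${ARCH}.tar.gz" -C /usr/local/bin      # "Extract Cilium CLI to /usr/local/bin"
  rm "cilium-linux-${ARCH}.tar.gz" "cilium-linux-${ARCH}.tar.gz.sha256sum"  # "Remove downloaded tarball and checksum file"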
2025-09-18 00:45:12.284052 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-18 00:45:12.284061 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.194) 0:02:55.046 **** 2025-09-18 00:45:12.284071 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284080 | orchestrator | 2025-09-18 00:45:12.284089 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-18 00:45:12.284099 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.198) 0:02:55.244 **** 2025-09-18 00:45:12.284109 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-18 00:45:12.284118 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-18 00:45:12.284128 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284137 | orchestrator | 2025-09-18 00:45:12.284147 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-18 00:45:12.284156 | orchestrator | Thursday 18 September 2025 00:44:45 +0000 (0:00:00.578) 0:02:55.823 **** 2025-09-18 00:45:12.284166 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284175 | orchestrator | 2025-09-18 00:45:12.284185 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-18 00:45:12.284194 | orchestrator | Thursday 18 September 2025 00:44:45 +0000 (0:00:00.330) 0:02:56.154 **** 2025-09-18 00:45:12.284204 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284213 | orchestrator | 2025-09-18 00:45:12.284223 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-18 00:45:12.284232 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.242) 0:02:56.397 **** 2025-09-18 00:45:12.284242 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284251 | orchestrator | 2025-09-18 00:45:12.284261 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-18 00:45:12.284270 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.205) 0:02:56.602 **** 2025-09-18 00:45:12.284280 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284289 | orchestrator | 2025-09-18 00:45:12.284299 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-18 00:45:12.284308 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.214) 0:02:56.816 **** 2025-09-18 00:45:12.284318 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284327 | orchestrator | 2025-09-18 00:45:12.284337 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-18 00:45:12.284346 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.187) 0:02:57.004 **** 2025-09-18 00:45:12.284356 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284365 | orchestrator | 2025-09-18 00:45:12.284375 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-18 00:45:12.284384 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.220) 0:02:57.225 **** 2025-09-18 00:45:12.284394 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284403 | orchestrator | 2025-09-18 00:45:12.284413 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-18 00:45:12.284422 
| orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.201) 0:02:57.426 **** 2025-09-18 00:45:12.284432 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284441 | orchestrator | 2025-09-18 00:45:12.284451 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-18 00:45:12.284465 | orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.247) 0:02:57.673 **** 2025-09-18 00:45:12.284475 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284485 | orchestrator | 2025-09-18 00:45:12.284494 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-18 00:45:12.284504 | orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.172) 0:02:57.846 **** 2025-09-18 00:45:12.284519 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284529 | orchestrator | 2025-09-18 00:45:12.284575 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-18 00:45:12.284591 | orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.168) 0:02:58.015 **** 2025-09-18 00:45:12.284601 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284618 | orchestrator | 2025-09-18 00:45:12.284635 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-18 00:45:12.284650 | orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.180) 0:02:58.195 **** 2025-09-18 00:45:12.284666 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-18 00:45:12.284682 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-18 00:45:12.284698 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-18 00:45:12.284715 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-18 00:45:12.284733 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284749 | orchestrator | 2025-09-18 00:45:12.284763 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-18 00:45:12.284773 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:00.773) 0:02:58.968 **** 2025-09-18 00:45:12.284783 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284792 | orchestrator | 2025-09-18 00:45:12.284802 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-18 00:45:12.284812 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:00.185) 0:02:59.154 **** 2025-09-18 00:45:12.284821 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284831 | orchestrator | 2025-09-18 00:45:12.284840 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-18 00:45:12.284850 | orchestrator | Thursday 18 September 2025 00:44:49 +0000 (0:00:00.232) 0:02:59.387 **** 2025-09-18 00:45:12.284859 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284869 | orchestrator | 2025-09-18 00:45:12.284877 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-18 00:45:12.284884 | orchestrator | Thursday 18 September 2025 00:44:49 +0000 (0:00:00.224) 0:02:59.611 **** 2025-09-18 00:45:12.284892 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284900 | orchestrator | 2025-09-18 00:45:12.284908 | orchestrator | TASK [k3s_server_post : Test 
for BGP config resources] ************************* 2025-09-18 00:45:12.284916 | orchestrator | Thursday 18 September 2025 00:44:49 +0000 (0:00:00.212) 0:02:59.823 **** 2025-09-18 00:45:12.284924 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-18 00:45:12.284931 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-18 00:45:12.284939 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284947 | orchestrator | 2025-09-18 00:45:12.284955 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-18 00:45:12.284963 | orchestrator | Thursday 18 September 2025 00:44:49 +0000 (0:00:00.312) 0:03:00.136 **** 2025-09-18 00:45:12.284971 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.284978 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.284986 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.284994 | orchestrator | 2025-09-18 00:45:12.285002 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-18 00:45:12.285009 | orchestrator | Thursday 18 September 2025 00:44:50 +0000 (0:00:00.516) 0:03:00.652 **** 2025-09-18 00:45:12.285017 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.285025 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:12.285033 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.285041 | orchestrator | 2025-09-18 00:45:12.285048 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-18 00:45:12.285056 | orchestrator | 2025-09-18 00:45:12.285064 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-18 00:45:12.285081 | orchestrator | Thursday 18 September 2025 00:44:51 +0000 (0:00:01.117) 0:03:01.770 **** 2025-09-18 00:45:12.285089 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:12.285097 | orchestrator | 2025-09-18 00:45:12.285105 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-18 00:45:12.285113 | orchestrator | Thursday 18 September 2025 00:44:51 +0000 (0:00:00.165) 0:03:01.936 **** 2025-09-18 00:45:12.285120 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 00:45:12.285128 | orchestrator | 2025-09-18 00:45:12.285136 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-18 00:45:12.285144 | orchestrator | Thursday 18 September 2025 00:44:51 +0000 (0:00:00.238) 0:03:02.174 **** 2025-09-18 00:45:12.285151 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:12.285159 | orchestrator | 2025-09-18 00:45:12.285167 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-18 00:45:12.285175 | orchestrator | 2025-09-18 00:45:12.285183 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-18 00:45:12.285191 | orchestrator | Thursday 18 September 2025 00:44:56 +0000 (0:00:04.985) 0:03:07.160 **** 2025-09-18 00:45:12.285198 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:12.285206 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:12.285214 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:12.285222 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:12.285229 | orchestrator | ok: [testbed-node-1] 
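The merged label plan is applied per node in the "Manage labels" task that follows, giving the first three nodes the OSISM control-plane, network-plane and Rook service roles and the remaining nodes the compute-plane, worker and rook-osd roles. A minimal sketch of the equivalent kubectl calls, assuming plain kubectl against the kubeconfig prepared earlier (the role itself applies these through Ansible delegated to localhost):

  kubectl label node testbed-node-0 node-role.osism.tech/control-plane=true --overwrite
  kubectl label node testbed-node-0 openstack-control-plane=enabled --overwrite
  kubectl label node testbed-node-3 node-role.osism.tech/compute-plane=true --overwrite
  kubectl label node testbed-node-3 node-role.kubernetes.io/worker=worker --overwrite
  kubectl label node testbed-node-3 node-role.osism.tech/rook-osd=true --overwrite
  kubectl get nodes --show-labels   # confirm the roles landed on the intended nodes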
2025-09-18 00:45:12.285237 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:12.285245 | orchestrator | 2025-09-18 00:45:12.285252 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-18 00:45:12.285260 | orchestrator | Thursday 18 September 2025 00:44:57 +0000 (0:00:00.858) 0:03:08.018 **** 2025-09-18 00:45:12.285274 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-18 00:45:12.285283 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-18 00:45:12.285290 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-18 00:45:12.285298 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-18 00:45:12.285306 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-18 00:45:12.285319 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-18 00:45:12.285327 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-18 00:45:12.285335 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-18 00:45:12.285342 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-18 00:45:12.285350 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-18 00:45:12.285358 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-18 00:45:12.285366 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-18 00:45:12.285373 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-18 00:45:12.285381 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-18 00:45:12.285389 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-18 00:45:12.285397 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-18 00:45:12.285405 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-18 00:45:12.285412 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-18 00:45:12.285425 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-18 00:45:12.285433 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-18 00:45:12.285441 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-18 00:45:12.285448 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-18 00:45:12.285456 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-18 00:45:12.285464 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-18 00:45:12.285472 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-18 00:45:12.285479 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-rgw=true) 2025-09-18 00:45:12.285487 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-18 00:45:12.285495 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-18 00:45:12.285503 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-18 00:45:12.285510 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-18 00:45:12.285518 | orchestrator | 2025-09-18 00:45:12.285526 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-18 00:45:12.285571 | orchestrator | Thursday 18 September 2025 00:45:09 +0000 (0:00:12.174) 0:03:20.193 **** 2025-09-18 00:45:12.285581 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.285589 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.285597 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.285603 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.285610 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.285617 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.285623 | orchestrator | 2025-09-18 00:45:12.285630 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-18 00:45:12.285637 | orchestrator | Thursday 18 September 2025 00:45:10 +0000 (0:00:00.616) 0:03:20.809 **** 2025-09-18 00:45:12.285643 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:12.285650 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:12.285656 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:12.285663 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:12.285670 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:12.285676 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:12.285683 | orchestrator | 2025-09-18 00:45:12.285689 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:45:12.285696 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:45:12.285704 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-18 00:45:12.285711 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-18 00:45:12.285722 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-18 00:45:12.285730 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-18 00:45:12.285736 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-18 00:45:12.285743 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-18 00:45:12.285755 | orchestrator | 2025-09-18 00:45:12.285761 | orchestrator | 2025-09-18 00:45:12.285768 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:45:12.285775 | orchestrator | Thursday 18 September 2025 00:45:10 +0000 (0:00:00.339) 0:03:21.149 **** 2025-09-18 00:45:12.285781 | orchestrator | =============================================================================== 2025-09-18 00:45:12.285788 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.38s 2025-09-18 00:45:12.285794 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.41s 2025-09-18 00:45:12.285801 | orchestrator | kubectl : Install required packages ------------------------------------ 12.87s 2025-09-18 00:45:12.285808 | orchestrator | Manage labels ---------------------------------------------------------- 12.17s 2025-09-18 00:45:12.285820 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.10s 2025-09-18 00:45:12.285827 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.07s 2025-09-18 00:45:12.285834 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.95s 2025-09-18 00:45:12.285840 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.99s 2025-09-18 00:45:12.285847 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.89s 2025-09-18 00:45:12.285853 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.46s 2025-09-18 00:45:12.285860 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.11s 2025-09-18 00:45:12.285867 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.11s 2025-09-18 00:45:12.285873 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.88s 2025-09-18 00:45:12.285880 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.82s 2025-09-18 00:45:12.285887 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.70s 2025-09-18 00:45:12.285893 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.53s 2025-09-18 00:45:12.285900 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.43s 2025-09-18 00:45:12.285906 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.42s 2025-09-18 00:45:12.285913 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.41s 2025-09-18 00:45:12.285919 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.37s 2025-09-18 00:45:15.341267 | orchestrator | 2025-09-18 00:45:15 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:15.342866 | orchestrator | 2025-09-18 00:45:15 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:15.343654 | orchestrator | 2025-09-18 00:45:15 | INFO  | Task 99ef0a5a-d326-4f1b-a3ff-a5e2295532ef is in state STARTED 2025-09-18 00:45:15.344602 | orchestrator | 2025-09-18 00:45:15 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:15.346606 | orchestrator | 2025-09-18 00:45:15 | INFO  | Task 200fb13b-45f4-4aa7-8430-44a3b75c24c0 is in state STARTED 2025-09-18 00:45:15.348925 | orchestrator | 2025-09-18 00:45:15 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:15.348951 | orchestrator | 2025-09-18 00:45:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:18.405845 | orchestrator | 2025-09-18 00:45:18 | INFO  | Task 
f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:18.407624 | orchestrator | 2025-09-18 00:45:18 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:18.409574 | orchestrator | 2025-09-18 00:45:18 | INFO  | Task 99ef0a5a-d326-4f1b-a3ff-a5e2295532ef is in state STARTED 2025-09-18 00:45:18.411421 | orchestrator | 2025-09-18 00:45:18 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:18.412564 | orchestrator | 2025-09-18 00:45:18 | INFO  | Task 200fb13b-45f4-4aa7-8430-44a3b75c24c0 is in state SUCCESS 2025-09-18 00:45:18.414059 | orchestrator | 2025-09-18 00:45:18 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:18.414364 | orchestrator | 2025-09-18 00:45:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:21.451400 | orchestrator | 2025-09-18 00:45:21 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:21.454126 | orchestrator | 2025-09-18 00:45:21 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:21.454351 | orchestrator | 2025-09-18 00:45:21 | INFO  | Task 99ef0a5a-d326-4f1b-a3ff-a5e2295532ef is in state SUCCESS 2025-09-18 00:45:21.454992 | orchestrator | 2025-09-18 00:45:21 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state STARTED 2025-09-18 00:45:21.455498 | orchestrator | 2025-09-18 00:45:21 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:21.455659 | orchestrator | 2025-09-18 00:45:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:24.494005 | orchestrator | 2025-09-18 00:45:24 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:24.495252 | orchestrator | 2025-09-18 00:45:24 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:24.498070 | orchestrator | 2025-09-18 00:45:24 | INFO  | Task 354a8cdf-db92-47b3-be28-759ba5d68741 is in state SUCCESS 2025-09-18 00:45:24.499075 | orchestrator | 2025-09-18 00:45:24.499110 | orchestrator | 2025-09-18 00:45:24.499123 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-18 00:45:24.499135 | orchestrator | 2025-09-18 00:45:24.499365 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-18 00:45:24.499388 | orchestrator | Thursday 18 September 2025 00:45:14 +0000 (0:00:00.222) 0:00:00.223 **** 2025-09-18 00:45:24.499409 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-18 00:45:24.499428 | orchestrator | 2025-09-18 00:45:24.499447 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-18 00:45:24.499465 | orchestrator | Thursday 18 September 2025 00:45:15 +0000 (0:00:00.944) 0:00:01.167 **** 2025-09-18 00:45:24.499481 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:24.499493 | orchestrator | 2025-09-18 00:45:24.499505 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-18 00:45:24.499515 | orchestrator | Thursday 18 September 2025 00:45:17 +0000 (0:00:01.321) 0:00:02.489 **** 2025-09-18 00:45:24.499553 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:24.499564 | orchestrator | 2025-09-18 00:45:24.499575 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 
00:45:24.499586 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:45:24.499599 | orchestrator | 2025-09-18 00:45:24.499610 | orchestrator | 2025-09-18 00:45:24.499621 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:45:24.499632 | orchestrator | Thursday 18 September 2025 00:45:17 +0000 (0:00:00.453) 0:00:02.942 **** 2025-09-18 00:45:24.499643 | orchestrator | =============================================================================== 2025-09-18 00:45:24.499654 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.32s 2025-09-18 00:45:24.499665 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.94s 2025-09-18 00:45:24.499700 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.45s 2025-09-18 00:45:24.499711 | orchestrator | 2025-09-18 00:45:24.499722 | orchestrator | 2025-09-18 00:45:24.499733 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-18 00:45:24.499743 | orchestrator | 2025-09-18 00:45:24.499754 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-18 00:45:24.499764 | orchestrator | Thursday 18 September 2025 00:45:14 +0000 (0:00:00.163) 0:00:00.163 **** 2025-09-18 00:45:24.499775 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:24.499787 | orchestrator | 2025-09-18 00:45:24.499797 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-18 00:45:24.499808 | orchestrator | Thursday 18 September 2025 00:45:15 +0000 (0:00:00.546) 0:00:00.710 **** 2025-09-18 00:45:24.499819 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:24.499830 | orchestrator | 2025-09-18 00:45:24.499841 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-18 00:45:24.499851 | orchestrator | Thursday 18 September 2025 00:45:15 +0000 (0:00:00.600) 0:00:01.310 **** 2025-09-18 00:45:24.499862 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-18 00:45:24.499873 | orchestrator | 2025-09-18 00:45:24.499884 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-18 00:45:24.499894 | orchestrator | Thursday 18 September 2025 00:45:16 +0000 (0:00:00.799) 0:00:02.110 **** 2025-09-18 00:45:24.499905 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:24.499915 | orchestrator | 2025-09-18 00:45:24.499926 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-18 00:45:24.499937 | orchestrator | Thursday 18 September 2025 00:45:17 +0000 (0:00:01.098) 0:00:03.209 **** 2025-09-18 00:45:24.499948 | orchestrator | changed: [testbed-manager] 2025-09-18 00:45:24.499958 | orchestrator | 2025-09-18 00:45:24.499969 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-18 00:45:24.499980 | orchestrator | Thursday 18 September 2025 00:45:18 +0000 (0:00:00.623) 0:00:03.833 **** 2025-09-18 00:45:24.499991 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 00:45:24.500002 | orchestrator | 2025-09-18 00:45:24.500013 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-18 00:45:24.500023 | orchestrator | Thursday 
18 September 2025 00:45:19 +0000 (0:00:01.198) 0:00:05.032 **** 2025-09-18 00:45:24.500034 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 00:45:24.500045 | orchestrator | 2025-09-18 00:45:24.500055 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-18 00:45:24.500066 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.635) 0:00:05.667 **** 2025-09-18 00:45:24.500077 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:24.500088 | orchestrator | 2025-09-18 00:45:24.500098 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-18 00:45:24.500109 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.374) 0:00:06.041 **** 2025-09-18 00:45:24.500120 | orchestrator | ok: [testbed-manager] 2025-09-18 00:45:24.500131 | orchestrator | 2025-09-18 00:45:24.500153 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:45:24.500165 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:45:24.500176 | orchestrator | 2025-09-18 00:45:24.500187 | orchestrator | 2025-09-18 00:45:24.500198 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:45:24.500208 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.261) 0:00:06.303 **** 2025-09-18 00:45:24.500219 | orchestrator | =============================================================================== 2025-09-18 00:45:24.500230 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.20s 2025-09-18 00:45:24.500240 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.10s 2025-09-18 00:45:24.500259 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-09-18 00:45:24.500282 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.64s 2025-09-18 00:45:24.500293 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.62s 2025-09-18 00:45:24.500304 | orchestrator | Create .kube directory -------------------------------------------------- 0.60s 2025-09-18 00:45:24.500314 | orchestrator | Get home directory of operator user ------------------------------------- 0.55s 2025-09-18 00:45:24.500325 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2025-09-18 00:45:24.500335 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-09-18 00:45:24.500346 | orchestrator | 2025-09-18 00:45:24.500357 | orchestrator | 2025-09-18 00:45:24.500368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:45:24.500378 | orchestrator | 2025-09-18 00:45:24.500389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:45:24.500399 | orchestrator | Thursday 18 September 2025 00:44:17 +0000 (0:00:00.407) 0:00:00.407 **** 2025-09-18 00:45:24.500410 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:24.500421 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:24.500431 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:24.500442 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:24.500453 | orchestrator | ok: [testbed-node-4] 
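After the host-grouping results below, the openvswitch play rolls out Open vSwitch via Kolla containers; its first steps ("Load modules" and "Persist modules via modules-load.d") ensure the openvswitch kernel module is loaded now and after reboots. A minimal manual sketch, assuming the stock module name and a hypothetical drop-in file name:

  sudo modprobe openvswitch                                         # load the module immediately
  echo openvswitch | sudo tee /etc/modules-load.d/openvswitch.conf  # persist across reboots
  lsmod | grep openvswitch                                          # confirm the module is loaded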
2025-09-18 00:45:24.500463 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:24.500474 | orchestrator | 2025-09-18 00:45:24.500485 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:45:24.500495 | orchestrator | Thursday 18 September 2025 00:44:19 +0000 (0:00:01.415) 0:00:01.822 **** 2025-09-18 00:45:24.500506 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-18 00:45:24.500517 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-18 00:45:24.500545 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-18 00:45:24.500556 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-18 00:45:24.500567 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-18 00:45:24.500578 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-18 00:45:24.500588 | orchestrator | 2025-09-18 00:45:24.500599 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-18 00:45:24.500610 | orchestrator | 2025-09-18 00:45:24.500620 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-18 00:45:24.500631 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.807) 0:00:02.630 **** 2025-09-18 00:45:24.500643 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:45:24.500654 | orchestrator | 2025-09-18 00:45:24.500665 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-18 00:45:24.500676 | orchestrator | Thursday 18 September 2025 00:44:21 +0000 (0:00:01.411) 0:00:04.041 **** 2025-09-18 00:45:24.500687 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-18 00:45:24.500697 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-18 00:45:24.500708 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-18 00:45:24.500719 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-18 00:45:24.500730 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-18 00:45:24.500740 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-18 00:45:24.500751 | orchestrator | 2025-09-18 00:45:24.500762 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-18 00:45:24.500772 | orchestrator | Thursday 18 September 2025 00:44:23 +0000 (0:00:01.963) 0:00:06.005 **** 2025-09-18 00:45:24.500790 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-18 00:45:24.500800 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-18 00:45:24.500811 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-18 00:45:24.500822 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-18 00:45:24.500832 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-18 00:45:24.500843 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-18 00:45:24.500853 | orchestrator | 2025-09-18 00:45:24.500864 | orchestrator | TASK [module-load : Drop module persistence] 
*********************************** 2025-09-18 00:45:24.500874 | orchestrator | Thursday 18 September 2025 00:44:25 +0000 (0:00:02.233) 0:00:08.238 **** 2025-09-18 00:45:24.500885 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-18 00:45:24.500896 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:24.500906 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-18 00:45:24.500917 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:24.500932 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-18 00:45:24.500943 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:24.500954 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-18 00:45:24.500965 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:24.500975 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-18 00:45:24.500986 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:24.500996 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-18 00:45:24.501007 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:24.501018 | orchestrator | 2025-09-18 00:45:24.501029 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-18 00:45:24.501039 | orchestrator | Thursday 18 September 2025 00:44:27 +0000 (0:00:01.895) 0:00:10.134 **** 2025-09-18 00:45:24.501050 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:24.501061 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:24.501071 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:24.501090 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:24.501101 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:24.501112 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:24.501123 | orchestrator | 2025-09-18 00:45:24.501133 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-18 00:45:24.501144 | orchestrator | Thursday 18 September 2025 00:44:28 +0000 (0:00:00.649) 0:00:10.783 **** 2025-09-18 00:45:24.501157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501357 | orchestrator | 2025-09-18 00:45:24.501368 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-18 00:45:24.501380 | orchestrator | Thursday 18 September 2025 00:44:30 +0000 (0:00:02.051) 0:00:12.835 **** 2025-09-18 00:45:24.501391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501578 | orchestrator | 2025-09-18 00:45:24.501589 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-18 00:45:24.501606 | orchestrator | Thursday 18 September 2025 00:44:33 +0000 (0:00:03.535) 0:00:16.370 **** 2025-09-18 00:45:24.501618 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:45:24.501629 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:45:24.501639 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:45:24.501650 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:24.501661 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:24.501672 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:24.501682 | orchestrator | 2025-09-18 00:45:24.501693 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-18 00:45:24.501704 | orchestrator | Thursday 18 September 2025 00:44:35 +0000 (0:00:01.253) 0:00:17.624 **** 2025-09-18 00:45:24.501715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 
'timeout': '30'}}}) 2025-09-18 00:45:24.501795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 00:45:24.501892 | orchestrator | 2025-09-18 00:45:24.501903 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 00:45:24.501914 | orchestrator | Thursday 18 September 2025 00:44:37 +0000 (0:00:02.572) 0:00:20.197 **** 2025-09-18 00:45:24.501925 | orchestrator | 2025-09-18 00:45:24.501935 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 00:45:24.501946 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.295) 0:00:20.493 **** 2025-09-18 00:45:24.501957 | orchestrator | 2025-09-18 00:45:24.501968 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 00:45:24.501979 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.190) 0:00:20.683 **** 2025-09-18 00:45:24.501989 | orchestrator | 2025-09-18 00:45:24.502000 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 00:45:24.502011 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.166) 0:00:20.849 **** 2025-09-18 00:45:24.502068 | orchestrator | 2025-09-18 00:45:24.502080 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 00:45:24.502091 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.131) 0:00:20.981 **** 2025-09-18 00:45:24.502102 | orchestrator | 2025-09-18 00:45:24.502113 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 00:45:24.502123 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.126) 0:00:21.107 **** 2025-09-18 00:45:24.502134 | orchestrator | 2025-09-18 00:45:24.502144 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-18 00:45:24.502155 | orchestrator | Thursday 18 September 2025 00:44:38 
+0000 (0:00:00.140) 0:00:21.247 **** 2025-09-18 00:45:24.502166 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:24.502177 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:24.502187 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:24.502198 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:24.502209 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:24.502220 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:24.502230 | orchestrator | 2025-09-18 00:45:24.502241 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-18 00:45:24.502252 | orchestrator | Thursday 18 September 2025 00:44:49 +0000 (0:00:10.780) 0:00:32.027 **** 2025-09-18 00:45:24.502263 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:45:24.502274 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:45:24.502285 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:45:24.502295 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:45:24.502306 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:45:24.502324 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:45:24.502334 | orchestrator | 2025-09-18 00:45:24.502350 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-18 00:45:24.502361 | orchestrator | Thursday 18 September 2025 00:44:51 +0000 (0:00:01.922) 0:00:33.950 **** 2025-09-18 00:45:24.502372 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:24.502383 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:24.502394 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:24.502404 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:24.502415 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:24.502426 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:24.502436 | orchestrator | 2025-09-18 00:45:24.502447 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-18 00:45:24.502458 | orchestrator | Thursday 18 September 2025 00:45:00 +0000 (0:00:08.830) 0:00:42.780 **** 2025-09-18 00:45:24.502469 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-18 00:45:24.502487 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-18 00:45:24.502498 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-18 00:45:24.502510 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-18 00:45:24.502535 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-18 00:45:24.502546 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-18 00:45:24.502557 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-18 00:45:24.502568 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-18 00:45:24.502579 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-18 00:45:24.502590 | orchestrator | changed: 
[testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-18 00:45:24.502600 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-18 00:45:24.502611 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 00:45:24.502622 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-18 00:45:24.502632 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 00:45:24.502643 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 00:45:24.502653 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 00:45:24.502664 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 00:45:24.502675 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 00:45:24.502685 | orchestrator | 2025-09-18 00:45:24.502696 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-18 00:45:24.502707 | orchestrator | Thursday 18 September 2025 00:45:09 +0000 (0:00:08.742) 0:00:51.523 **** 2025-09-18 00:45:24.502719 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-18 00:45:24.502730 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:24.502740 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-18 00:45:24.502758 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:24.502769 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-18 00:45:24.502780 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:24.502790 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-18 00:45:24.502801 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-18 00:45:24.502812 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-18 00:45:24.502823 | orchestrator | 2025-09-18 00:45:24.502833 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-18 00:45:24.502844 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:02.678) 0:00:54.202 **** 2025-09-18 00:45:24.502855 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-18 00:45:24.502866 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:45:24.502876 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-18 00:45:24.502887 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:45:24.502898 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-18 00:45:24.502908 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:45:24.502919 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-18 00:45:24.502930 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-18 00:45:24.502941 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-18 00:45:24.502951 | orchestrator | 2025-09-18 00:45:24.502962 | orchestrator | RUNNING 
HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-18 00:45:24.502973 | orchestrator | Thursday 18 September 2025 00:45:15 +0000 (0:00:03.660) 0:00:57.863 **** 2025-09-18 00:45:24.502988 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:45:24.502999 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:45:24.503009 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:45:24.503020 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:45:24.503031 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:45:24.503042 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:45:24.503052 | orchestrator | 2025-09-18 00:45:24.503063 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:45:24.503074 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 00:45:24.503086 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 00:45:24.503102 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 00:45:24.503113 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 00:45:24.503124 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 00:45:24.503135 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 00:45:24.503146 | orchestrator | 2025-09-18 00:45:24.503156 | orchestrator | 2025-09-18 00:45:24.503167 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:45:24.503178 | orchestrator | Thursday 18 September 2025 00:45:23 +0000 (0:00:08.172) 0:01:06.035 **** 2025-09-18 00:45:24.503188 | orchestrator | =============================================================================== 2025-09-18 00:45:24.503199 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.00s 2025-09-18 00:45:24.503210 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.78s 2025-09-18 00:45:24.503227 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.74s 2025-09-18 00:45:24.503238 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.66s 2025-09-18 00:45:24.503249 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.54s 2025-09-18 00:45:24.503259 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.68s 2025-09-18 00:45:24.503270 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.57s 2025-09-18 00:45:24.503281 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.23s 2025-09-18 00:45:24.503291 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.05s 2025-09-18 00:45:24.503302 | orchestrator | module-load : Load modules ---------------------------------------------- 1.96s 2025-09-18 00:45:24.503313 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.92s 2025-09-18 00:45:24.503323 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.90s 2025-09-18 00:45:24.503334 | 
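
The openvswitch play recapped above follows the usual Kolla-style pattern: load and persist the kernel module, template the container configs, restart the containers via handlers, then configure the running Open vSwitch. A rough Ansible sketch of the module handling and of the "Set system-id, hostname and hw-offload" / bridge and port steps, using plain ovs-vsctl command calls as an assumption (the real role drives Docker containers and kolla_toolbox modules, which are omitted here; the br-ex/vxlan0 step only ran on testbed-node-0 through testbed-node-2 in this log):

    - name: Load the openvswitch kernel module           # mirrors "module-load : Load modules"
      community.general.modprobe:
        name: openvswitch
        state: present

    - name: Persist the module via modules-load.d        # mirrors "Persist modules via modules-load.d"
      ansible.builtin.copy:
        content: "openvswitch\n"
        dest: /etc/modules-load.d/openvswitch.conf
        mode: "0644"

    - name: Set system-id and hostname in the OVS database   # assumption: plain ovs-vsctl instead of the role's module
      ansible.builtin.command: >
        ovs-vsctl set Open_vSwitch .
        external-ids:system-id={{ inventory_hostname }}
        external-ids:hostname={{ inventory_hostname }}
      changed_when: true

    - name: Ensure the external bridge and its port exist     # mirrors the br-ex / vxlan0 tasks
      ansible.builtin.command: "{{ item }}"
      loop:
        - ovs-vsctl --may-exist add-br br-ex
        - ovs-vsctl --may-exist add-port br-ex vxlan0
      changed_when: true
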
orchestrator | Group hosts based on Kolla action --------------------------------------- 1.42s 2025-09-18 00:45:24.503345 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.41s 2025-09-18 00:45:24.503355 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.25s 2025-09-18 00:45:24.503366 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s 2025-09-18 00:45:24.503376 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2025-09-18 00:45:24.503387 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.65s 2025-09-18 00:45:24.503397 | orchestrator | 2025-09-18 00:45:24 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:24.503408 | orchestrator | 2025-09-18 00:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:27.539749 | orchestrator | 2025-09-18 00:45:27 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:27.539964 | orchestrator | 2025-09-18 00:45:27 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:27.542750 | orchestrator | 2025-09-18 00:45:27 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:27.543377 | orchestrator | 2025-09-18 00:45:27 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:27.543399 | orchestrator | 2025-09-18 00:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:30.574913 | orchestrator | 2025-09-18 00:45:30 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:30.575733 | orchestrator | 2025-09-18 00:45:30 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:30.576969 | orchestrator | 2025-09-18 00:45:30 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:30.578111 | orchestrator | 2025-09-18 00:45:30 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:30.578167 | orchestrator | 2025-09-18 00:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:33.607266 | orchestrator | 2025-09-18 00:45:33 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:33.607361 | orchestrator | 2025-09-18 00:45:33 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:33.608045 | orchestrator | 2025-09-18 00:45:33 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:33.609346 | orchestrator | 2025-09-18 00:45:33 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:33.609381 | orchestrator | 2025-09-18 00:45:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:36.647658 | orchestrator | 2025-09-18 00:45:36 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:36.651168 | orchestrator | 2025-09-18 00:45:36 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:36.652534 | orchestrator | 2025-09-18 00:45:36 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:36.653338 | orchestrator | 2025-09-18 00:45:36 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:36.653362 | orchestrator | 2025-09-18 00:45:36 | INFO  | Wait 
1 second(s) until the next check 2025-09-18 00:45:39.698756 | orchestrator | 2025-09-18 00:45:39 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:39.699952 | orchestrator | 2025-09-18 00:45:39 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:39.702259 | orchestrator | 2025-09-18 00:45:39 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:39.703493 | orchestrator | 2025-09-18 00:45:39 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:39.703520 | orchestrator | 2025-09-18 00:45:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:42.740946 | orchestrator | 2025-09-18 00:45:42 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:42.742351 | orchestrator | 2025-09-18 00:45:42 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:42.743200 | orchestrator | 2025-09-18 00:45:42 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:42.745386 | orchestrator | 2025-09-18 00:45:42 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:42.745413 | orchestrator | 2025-09-18 00:45:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:45.789880 | orchestrator | 2025-09-18 00:45:45 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:45.793442 | orchestrator | 2025-09-18 00:45:45 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:45.794453 | orchestrator | 2025-09-18 00:45:45 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:45.795289 | orchestrator | 2025-09-18 00:45:45 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:45.795316 | orchestrator | 2025-09-18 00:45:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:48.837877 | orchestrator | 2025-09-18 00:45:48 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:48.838459 | orchestrator | 2025-09-18 00:45:48 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:48.838573 | orchestrator | 2025-09-18 00:45:48 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:48.838597 | orchestrator | 2025-09-18 00:45:48 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:48.839458 | orchestrator | 2025-09-18 00:45:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:51.878935 | orchestrator | 2025-09-18 00:45:51 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:51.879035 | orchestrator | 2025-09-18 00:45:51 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:51.881347 | orchestrator | 2025-09-18 00:45:51 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:51.883698 | orchestrator | 2025-09-18 00:45:51 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:51.883737 | orchestrator | 2025-09-18 00:45:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:54.935998 | orchestrator | 2025-09-18 00:45:54 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:54.937061 | orchestrator | 2025-09-18 00:45:54 | INFO  | Task 
e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:54.937913 | orchestrator | 2025-09-18 00:45:54 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:54.939051 | orchestrator | 2025-09-18 00:45:54 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:54.939135 | orchestrator | 2025-09-18 00:45:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:45:57.998285 | orchestrator | 2025-09-18 00:45:57 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:45:58.000390 | orchestrator | 2025-09-18 00:45:58 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:45:58.004043 | orchestrator | 2025-09-18 00:45:58 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:45:58.006291 | orchestrator | 2025-09-18 00:45:58 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:45:58.006810 | orchestrator | 2025-09-18 00:45:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:01.046691 | orchestrator | 2025-09-18 00:46:01 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:01.046756 | orchestrator | 2025-09-18 00:46:01 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:01.046974 | orchestrator | 2025-09-18 00:46:01 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:01.047783 | orchestrator | 2025-09-18 00:46:01 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:01.047970 | orchestrator | 2025-09-18 00:46:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:04.228416 | orchestrator | 2025-09-18 00:46:04 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:04.230867 | orchestrator | 2025-09-18 00:46:04 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:04.232841 | orchestrator | 2025-09-18 00:46:04 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:04.235029 | orchestrator | 2025-09-18 00:46:04 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:04.235073 | orchestrator | 2025-09-18 00:46:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:07.277778 | orchestrator | 2025-09-18 00:46:07 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:07.280500 | orchestrator | 2025-09-18 00:46:07 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:07.281241 | orchestrator | 2025-09-18 00:46:07 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:07.284190 | orchestrator | 2025-09-18 00:46:07 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:07.284216 | orchestrator | 2025-09-18 00:46:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:10.328928 | orchestrator | 2025-09-18 00:46:10 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:10.331748 | orchestrator | 2025-09-18 00:46:10 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:10.333906 | orchestrator | 2025-09-18 00:46:10 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:10.335550 | orchestrator | 2025-09-18 00:46:10 | INFO  | Task 
01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:10.335578 | orchestrator | 2025-09-18 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:13.381207 | orchestrator | 2025-09-18 00:46:13 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:13.381317 | orchestrator | 2025-09-18 00:46:13 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:13.386376 | orchestrator | 2025-09-18 00:46:13 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:13.387449 | orchestrator | 2025-09-18 00:46:13 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:13.387524 | orchestrator | 2025-09-18 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:16.428783 | orchestrator | 2025-09-18 00:46:16 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:16.431664 | orchestrator | 2025-09-18 00:46:16 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:16.432976 | orchestrator | 2025-09-18 00:46:16 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:16.434857 | orchestrator | 2025-09-18 00:46:16 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:16.434908 | orchestrator | 2025-09-18 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:19.464968 | orchestrator | 2025-09-18 00:46:19 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:19.466847 | orchestrator | 2025-09-18 00:46:19 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:19.468604 | orchestrator | 2025-09-18 00:46:19 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:19.469987 | orchestrator | 2025-09-18 00:46:19 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:19.470011 | orchestrator | 2025-09-18 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:22.511376 | orchestrator | 2025-09-18 00:46:22 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:22.515974 | orchestrator | 2025-09-18 00:46:22 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:22.516967 | orchestrator | 2025-09-18 00:46:22 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:22.518368 | orchestrator | 2025-09-18 00:46:22 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:22.518563 | orchestrator | 2025-09-18 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:25.551417 | orchestrator | 2025-09-18 00:46:25 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:25.551595 | orchestrator | 2025-09-18 00:46:25 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:25.552501 | orchestrator | 2025-09-18 00:46:25 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:25.556049 | orchestrator | 2025-09-18 00:46:25 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:25.556062 | orchestrator | 2025-09-18 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:28.595350 | orchestrator | 2025-09-18 00:46:28 | INFO  | Task 
f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:28.595658 | orchestrator | 2025-09-18 00:46:28 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:28.596534 | orchestrator | 2025-09-18 00:46:28 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:28.597006 | orchestrator | 2025-09-18 00:46:28 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:28.597029 | orchestrator | 2025-09-18 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:31.629368 | orchestrator | 2025-09-18 00:46:31 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:31.629701 | orchestrator | 2025-09-18 00:46:31 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:31.630712 | orchestrator | 2025-09-18 00:46:31 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:31.631507 | orchestrator | 2025-09-18 00:46:31 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:31.631547 | orchestrator | 2025-09-18 00:46:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:34.655796 | orchestrator | 2025-09-18 00:46:34 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:34.657015 | orchestrator | 2025-09-18 00:46:34 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:34.658324 | orchestrator | 2025-09-18 00:46:34 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:34.661275 | orchestrator | 2025-09-18 00:46:34 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:34.661678 | orchestrator | 2025-09-18 00:46:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:37.692961 | orchestrator | 2025-09-18 00:46:37 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:37.693279 | orchestrator | 2025-09-18 00:46:37 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:37.695245 | orchestrator | 2025-09-18 00:46:37 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:37.698391 | orchestrator | 2025-09-18 00:46:37 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:37.698667 | orchestrator | 2025-09-18 00:46:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:40.725500 | orchestrator | 2025-09-18 00:46:40 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:40.725882 | orchestrator | 2025-09-18 00:46:40 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:40.727202 | orchestrator | 2025-09-18 00:46:40 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:40.731119 | orchestrator | 2025-09-18 00:46:40 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:40.731997 | orchestrator | 2025-09-18 00:46:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:43.763390 | orchestrator | 2025-09-18 00:46:43 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:43.765315 | orchestrator | 2025-09-18 00:46:43 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:43.766867 | orchestrator | 2025-09-18 00:46:43 | INFO  | Task 
629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:43.769068 | orchestrator | 2025-09-18 00:46:43 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:43.769162 | orchestrator | 2025-09-18 00:46:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:46.802687 | orchestrator | 2025-09-18 00:46:46 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:46.802799 | orchestrator | 2025-09-18 00:46:46 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:46.803056 | orchestrator | 2025-09-18 00:46:46 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:46.803591 | orchestrator | 2025-09-18 00:46:46 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:46.803615 | orchestrator | 2025-09-18 00:46:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:49.928015 | orchestrator | 2025-09-18 00:46:49 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:49.928113 | orchestrator | 2025-09-18 00:46:49 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:49.928128 | orchestrator | 2025-09-18 00:46:49 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:49.928140 | orchestrator | 2025-09-18 00:46:49 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:49.928151 | orchestrator | 2025-09-18 00:46:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:52.882229 | orchestrator | 2025-09-18 00:46:52 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:52.882336 | orchestrator | 2025-09-18 00:46:52 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:52.883454 | orchestrator | 2025-09-18 00:46:52 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:52.884224 | orchestrator | 2025-09-18 00:46:52 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:52.884249 | orchestrator | 2025-09-18 00:46:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:55.935683 | orchestrator | 2025-09-18 00:46:55 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:55.936799 | orchestrator | 2025-09-18 00:46:55 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:55.939422 | orchestrator | 2025-09-18 00:46:55 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:55.940331 | orchestrator | 2025-09-18 00:46:55 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:55.940654 | orchestrator | 2025-09-18 00:46:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:46:58.979317 | orchestrator | 2025-09-18 00:46:58 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:46:58.980013 | orchestrator | 2025-09-18 00:46:58 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:46:58.981592 | orchestrator | 2025-09-18 00:46:58 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:46:58.983070 | orchestrator | 2025-09-18 00:46:58 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state STARTED 2025-09-18 00:46:58.983290 | orchestrator | 2025-09-18 00:46:58 | INFO  | Wait 1 
second(s) until the next check 2025-09-18 00:47:02.021702 | orchestrator | 2025-09-18 00:47:02 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:02.022215 | orchestrator | 2025-09-18 00:47:02 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:02.023179 | orchestrator | 2025-09-18 00:47:02 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:02.036685 | orchestrator | 2025-09-18 00:47:02 | INFO  | Task 01f49c6d-b8a3-42f0-b089-adbbd8474aa5 is in state SUCCESS 2025-09-18 00:47:02.036723 | orchestrator | 2025-09-18 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:02.037607 | orchestrator | 2025-09-18 00:47:02.037633 | orchestrator | 2025-09-18 00:47:02.037646 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-18 00:47:02.037661 | orchestrator | 2025-09-18 00:47:02.037674 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-18 00:47:02.037685 | orchestrator | Thursday 18 September 2025 00:44:36 +0000 (0:00:00.110) 0:00:00.110 **** 2025-09-18 00:47:02.037697 | orchestrator | ok: [localhost] => { 2025-09-18 00:47:02.037710 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-18 00:47:02.037721 | orchestrator | } 2025-09-18 00:47:02.037732 | orchestrator | 2025-09-18 00:47:02.037743 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-18 00:47:02.037754 | orchestrator | Thursday 18 September 2025 00:44:36 +0000 (0:00:00.036) 0:00:00.147 **** 2025-09-18 00:47:02.037785 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-18 00:47:02.037798 | orchestrator | ...ignoring 2025-09-18 00:47:02.037810 | orchestrator | 2025-09-18 00:47:02.037821 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-18 00:47:02.037832 | orchestrator | Thursday 18 September 2025 00:44:39 +0000 (0:00:03.113) 0:00:03.260 **** 2025-09-18 00:47:02.037843 | orchestrator | skipping: [localhost] 2025-09-18 00:47:02.037854 | orchestrator | 2025-09-18 00:47:02.037864 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-18 00:47:02.037875 | orchestrator | Thursday 18 September 2025 00:44:40 +0000 (0:00:00.213) 0:00:03.474 **** 2025-09-18 00:47:02.037886 | orchestrator | ok: [localhost] 2025-09-18 00:47:02.037897 | orchestrator | 2025-09-18 00:47:02.037908 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:47:02.037919 | orchestrator | 2025-09-18 00:47:02.037929 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:47:02.037941 | orchestrator | Thursday 18 September 2025 00:44:40 +0000 (0:00:00.764) 0:00:04.238 **** 2025-09-18 00:47:02.037952 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:02.037963 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:02.037974 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:02.037985 | orchestrator | 2025-09-18 00:47:02.037996 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:47:02.038006 | orchestrator | Thursday 18 September 2025 00:44:42 +0000 (0:00:01.531) 0:00:05.770 **** 2025-09-18 00:47:02.038068 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-18 00:47:02.038083 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-18 00:47:02.038094 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-18 00:47:02.038105 | orchestrator | 2025-09-18 00:47:02.038115 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-18 00:47:02.038126 | orchestrator | 2025-09-18 00:47:02.038137 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-18 00:47:02.038148 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.889) 0:00:06.660 **** 2025-09-18 00:47:02.038158 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:47:02.038169 | orchestrator | 2025-09-18 00:47:02.038180 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-18 00:47:02.038208 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:00.399) 0:00:07.060 **** 2025-09-18 00:47:02.038219 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:02.038230 | orchestrator | 2025-09-18 00:47:02.038241 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-18 00:47:02.038252 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.988) 0:00:08.048 **** 2025-09-18 00:47:02.038262 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.038274 | orchestrator | 2025-09-18 00:47:02.038285 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-09-18 00:47:02.038295 | orchestrator | Thursday 18 September 2025 00:44:45 +0000 (0:00:00.469) 0:00:08.517 **** 2025-09-18 00:47:02.038306 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.038317 | orchestrator | 2025-09-18 00:47:02.038327 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-18 00:47:02.038338 | orchestrator | Thursday 18 September 2025 00:44:45 +0000 (0:00:00.338) 0:00:08.855 **** 2025-09-18 00:47:02.038349 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.038360 | orchestrator | 2025-09-18 00:47:02.038371 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-18 00:47:02.038382 | orchestrator | Thursday 18 September 2025 00:44:45 +0000 (0:00:00.347) 0:00:09.203 **** 2025-09-18 00:47:02.038393 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.038404 | orchestrator | 2025-09-18 00:47:02.038415 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-18 00:47:02.038444 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.482) 0:00:09.685 **** 2025-09-18 00:47:02.038455 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:47:02.038466 | orchestrator | 2025-09-18 00:47:02.038476 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-18 00:47:02.038487 | orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.960) 0:00:10.645 **** 2025-09-18 00:47:02.038498 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:02.038509 | orchestrator | 2025-09-18 00:47:02.038519 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-18 00:47:02.038530 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:01.019) 0:00:11.664 **** 2025-09-18 00:47:02.038541 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.038552 | orchestrator | 2025-09-18 00:47:02.038563 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-18 00:47:02.038574 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:00.385) 0:00:12.050 **** 2025-09-18 00:47:02.038585 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.038595 | orchestrator | 2025-09-18 00:47:02.038616 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-18 00:47:02.038628 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:00.363) 0:00:12.414 **** 2025-09-18 00:47:02.038651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.038676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.038689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.038701 | orchestrator | 2025-09-18 00:47:02.038712 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-18 00:47:02.038723 | orchestrator | Thursday 18 September 2025 00:44:50 +0000 (0:00:01.148) 0:00:13.562 **** 2025-09-18 00:47:02.038744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.038762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.038781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.038793 | orchestrator | 2025-09-18 00:47:02.038804 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-18 00:47:02.038815 | orchestrator | Thursday 18 September 2025 00:44:53 +0000 (0:00:03.749) 0:00:17.311 **** 2025-09-18 00:47:02.038825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-18 00:47:02.038837 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-18 00:47:02.038848 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-18 00:47:02.038858 | orchestrator | 2025-09-18 00:47:02.038869 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-18 00:47:02.038880 | orchestrator | Thursday 18 September 2025 00:44:55 +0000 (0:00:01.645) 0:00:18.957 **** 2025-09-18 00:47:02.038891 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 
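The rabbitmq role tasks logged above render the role's Jinja2 templates (rabbitmq-env.conf, rabbitmq.conf, erl_inetrc, advanced.config, definitions.json, enabled_plugins) into the container configuration directory on each controller and later trigger the "Restart rabbitmq container" handler. A minimal sketch of that pattern is shown here for orientation; the task body, destination path, and file mode are assumptions for illustration and are not the actual kolla-ansible role source.
# Illustrative sketch only -- not the kolla-ansible rabbitmq role source.
# Renders one template per configuration file into the kolla config dir
# and notifies the restart handler seen later in this log.
- name: Copying over RabbitMQ configuration files (sketch)
  become: true
  ansible.builtin.template:
    src: "{{ item.src }}"                      # hypothetical template location
    dest: "/etc/kolla/rabbitmq/{{ item.dest }}"  # assumed destination path
    mode: "0660"
  loop:
    - { src: rabbitmq-env.conf.j2, dest: rabbitmq-env.conf }
    - { src: rabbitmq.conf.j2, dest: rabbitmq.conf }
    - { src: erl_inetrc.j2, dest: erl_inetrc }
    - { src: advanced.config.j2, dest: advanced.config }
    - { src: definitions.json.j2, dest: definitions.json }
    - { src: enabled_plugins.j2, dest: enabled_plugins }
  notify:
    - Restart rabbitmq container
Because the templates are copied on every node in the rabbitmq group, a change to any of them re-notifies the handler, which is why the log later shows the container being restarted node by node.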
2025-09-18 00:47:02.038902 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-18 00:47:02.038912 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-18 00:47:02.038923 | orchestrator | 2025-09-18 00:47:02.038933 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-18 00:47:02.038944 | orchestrator | Thursday 18 September 2025 00:44:58 +0000 (0:00:02.975) 0:00:21.932 **** 2025-09-18 00:47:02.038955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-18 00:47:02.038965 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-18 00:47:02.038976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-18 00:47:02.038987 | orchestrator | 2025-09-18 00:47:02.038998 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-18 00:47:02.039009 | orchestrator | Thursday 18 September 2025 00:45:01 +0000 (0:00:03.001) 0:00:24.934 **** 2025-09-18 00:47:02.039025 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-18 00:47:02.039037 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-18 00:47:02.039048 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-18 00:47:02.039065 | orchestrator | 2025-09-18 00:47:02.039076 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-18 00:47:02.039087 | orchestrator | Thursday 18 September 2025 00:45:04 +0000 (0:00:03.484) 0:00:28.418 **** 2025-09-18 00:47:02.039098 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-18 00:47:02.039113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-18 00:47:02.039124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-18 00:47:02.039135 | orchestrator | 2025-09-18 00:47:02.039146 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-18 00:47:02.039157 | orchestrator | Thursday 18 September 2025 00:45:06 +0000 (0:00:01.737) 0:00:30.155 **** 2025-09-18 00:47:02.039167 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-18 00:47:02.039178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-18 00:47:02.039189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-18 00:47:02.039200 | orchestrator | 2025-09-18 00:47:02.039210 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-18 00:47:02.039221 | orchestrator | Thursday 18 September 2025 00:45:08 +0000 (0:00:02.279) 0:00:32.435 **** 2025-09-18 00:47:02.039232 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:02.039243 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.039253 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:02.039264 | orchestrator | 2025-09-18 
00:47:02.039275 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-18 00:47:02.039285 | orchestrator | Thursday 18 September 2025 00:45:09 +0000 (0:00:01.002) 0:00:33.437 **** 2025-09-18 00:47:02.039297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.039309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.039342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:47:02.039354 | orchestrator | 2025-09-18 
00:47:02.039365 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-18 00:47:02.039376 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:01.595) 0:00:35.033 **** 2025-09-18 00:47:02.039386 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:02.039397 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:02.039408 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:02.039419 | orchestrator | 2025-09-18 00:47:02.039462 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-18 00:47:02.039473 | orchestrator | Thursday 18 September 2025 00:45:12 +0000 (0:00:01.309) 0:00:36.342 **** 2025-09-18 00:47:02.039484 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:02.039494 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:02.039505 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:02.039515 | orchestrator | 2025-09-18 00:47:02.039526 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-18 00:47:02.039537 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:07.611) 0:00:43.954 **** 2025-09-18 00:47:02.039547 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:02.039558 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:02.039569 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:02.039579 | orchestrator | 2025-09-18 00:47:02.039590 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-18 00:47:02.039600 | orchestrator | 2025-09-18 00:47:02.039611 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-18 00:47:02.039622 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.256) 0:00:44.210 **** 2025-09-18 00:47:02.039633 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:02.039643 | orchestrator | 2025-09-18 00:47:02.039654 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-18 00:47:02.039665 | orchestrator | Thursday 18 September 2025 00:45:21 +0000 (0:00:00.557) 0:00:44.768 **** 2025-09-18 00:47:02.039675 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:02.039686 | orchestrator | 2025-09-18 00:47:02.039697 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-18 00:47:02.039707 | orchestrator | Thursday 18 September 2025 00:45:21 +0000 (0:00:00.186) 0:00:44.954 **** 2025-09-18 00:47:02.039718 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:02.039729 | orchestrator | 2025-09-18 00:47:02.039739 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-18 00:47:02.039750 | orchestrator | Thursday 18 September 2025 00:45:23 +0000 (0:00:01.586) 0:00:46.541 **** 2025-09-18 00:47:02.039761 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:02.039771 | orchestrator | 2025-09-18 00:47:02.039782 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-18 00:47:02.039793 | orchestrator | 2025-09-18 00:47:02.039804 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-18 00:47:02.039822 | orchestrator | Thursday 18 September 2025 00:46:18 +0000 (0:00:55.906) 0:01:42.448 **** 2025-09-18 00:47:02.039832 | orchestrator | ok: [testbed-node-1] 2025-09-18 
00:47:02.039843 | orchestrator | 2025-09-18 00:47:02.039854 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-18 00:47:02.039864 | orchestrator | Thursday 18 September 2025 00:46:19 +0000 (0:00:00.617) 0:01:43.065 **** 2025-09-18 00:47:02.039875 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:02.039886 | orchestrator | 2025-09-18 00:47:02.039896 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-18 00:47:02.039907 | orchestrator | Thursday 18 September 2025 00:46:19 +0000 (0:00:00.223) 0:01:43.289 **** 2025-09-18 00:47:02.039917 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:02.039928 | orchestrator | 2025-09-18 00:47:02.039939 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-18 00:47:02.039949 | orchestrator | Thursday 18 September 2025 00:46:21 +0000 (0:00:02.072) 0:01:45.361 **** 2025-09-18 00:47:02.039960 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:02.039970 | orchestrator | 2025-09-18 00:47:02.039981 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-18 00:47:02.039992 | orchestrator | 2025-09-18 00:47:02.040003 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-18 00:47:02.040014 | orchestrator | Thursday 18 September 2025 00:46:37 +0000 (0:00:15.733) 0:02:01.095 **** 2025-09-18 00:47:02.040024 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:02.040035 | orchestrator | 2025-09-18 00:47:02.040045 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-18 00:47:02.040056 | orchestrator | Thursday 18 September 2025 00:46:38 +0000 (0:00:00.713) 0:02:01.808 **** 2025-09-18 00:47:02.040067 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:02.040077 | orchestrator | 2025-09-18 00:47:02.040088 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-18 00:47:02.040099 | orchestrator | Thursday 18 September 2025 00:46:38 +0000 (0:00:00.192) 0:02:02.000 **** 2025-09-18 00:47:02.040109 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:02.040120 | orchestrator | 2025-09-18 00:47:02.040131 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-18 00:47:02.040148 | orchestrator | Thursday 18 September 2025 00:46:40 +0000 (0:00:01.652) 0:02:03.653 **** 2025-09-18 00:47:02.040159 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:02.040169 | orchestrator | 2025-09-18 00:47:02.040180 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-18 00:47:02.040191 | orchestrator | 2025-09-18 00:47:02.040201 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-18 00:47:02.040212 | orchestrator | Thursday 18 September 2025 00:46:56 +0000 (0:00:16.299) 0:02:19.953 **** 2025-09-18 00:47:02.040223 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:47:02.040234 | orchestrator | 2025-09-18 00:47:02.040244 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-18 00:47:02.040255 | orchestrator | Thursday 18 September 2025 00:46:57 +0000 (0:00:00.553) 0:02:20.506 **** 2025-09-18 00:47:02.040270 | orchestrator | 
[WARNING]: Could not match supplied host pattern, ignoring: 2025-09-18 00:47:02.040281 | orchestrator | enable_outward_rabbitmq_True 2025-09-18 00:47:02.040292 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-18 00:47:02.040303 | orchestrator | outward_rabbitmq_restart 2025-09-18 00:47:02.040313 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:02.040324 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:02.040335 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:02.040345 | orchestrator | 2025-09-18 00:47:02.040356 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-18 00:47:02.040367 | orchestrator | skipping: no hosts matched 2025-09-18 00:47:02.040378 | orchestrator | 2025-09-18 00:47:02.040395 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-18 00:47:02.040405 | orchestrator | skipping: no hosts matched 2025-09-18 00:47:02.040416 | orchestrator | 2025-09-18 00:47:02.040482 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-18 00:47:02.040493 | orchestrator | skipping: no hosts matched 2025-09-18 00:47:02.040504 | orchestrator | 2025-09-18 00:47:02.040515 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:47:02.040526 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-18 00:47:02.040538 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-18 00:47:02.040549 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:47:02.040560 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:47:02.040571 | orchestrator | 2025-09-18 00:47:02.040581 | orchestrator | 2025-09-18 00:47:02.040592 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:47:02.040603 | orchestrator | Thursday 18 September 2025 00:46:59 +0000 (0:00:02.757) 0:02:23.264 **** 2025-09-18 00:47:02.040614 | orchestrator | =============================================================================== 2025-09-18 00:47:02.040624 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.94s 2025-09-18 00:47:02.040635 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.61s 2025-09-18 00:47:02.040646 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.31s 2025-09-18 00:47:02.040656 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.75s 2025-09-18 00:47:02.040667 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.48s 2025-09-18 00:47:02.040678 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.11s 2025-09-18 00:47:02.040688 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.00s 2025-09-18 00:47:02.040699 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.97s 2025-09-18 00:47:02.040710 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.76s 2025-09-18 00:47:02.040720 | orchestrator | rabbitmq : Copying 
over enabled_plugins --------------------------------- 2.28s 2025-09-18 00:47:02.040731 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.89s 2025-09-18 00:47:02.040742 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.74s 2025-09-18 00:47:02.040752 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.65s 2025-09-18 00:47:02.040763 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.60s 2025-09-18 00:47:02.040774 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.53s 2025-09-18 00:47:02.040784 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.31s 2025-09-18 00:47:02.040795 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.15s 2025-09-18 00:47:02.040806 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.02s 2025-09-18 00:47:02.040816 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.00s 2025-09-18 00:47:02.040827 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s 2025-09-18 00:47:05.068913 | orchestrator | 2025-09-18 00:47:05 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:05.069125 | orchestrator | 2025-09-18 00:47:05 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:05.070906 | orchestrator | 2025-09-18 00:47:05 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:05.070955 | orchestrator | 2025-09-18 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:08.105956 | orchestrator | 2025-09-18 00:47:08 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:08.108584 | orchestrator | 2025-09-18 00:47:08 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:08.111717 | orchestrator | 2025-09-18 00:47:08 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:08.111766 | orchestrator | 2025-09-18 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:11.161834 | orchestrator | 2025-09-18 00:47:11 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:11.163446 | orchestrator | 2025-09-18 00:47:11 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:11.165516 | orchestrator | 2025-09-18 00:47:11 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:11.166213 | orchestrator | 2025-09-18 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:14.205934 | orchestrator | 2025-09-18 00:47:14 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:14.206577 | orchestrator | 2025-09-18 00:47:14 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:14.208047 | orchestrator | 2025-09-18 00:47:14 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:14.208130 | orchestrator | 2025-09-18 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:17.244692 | orchestrator | 2025-09-18 00:47:17 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:17.245296 | 
orchestrator | 2025-09-18 00:47:17 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:17.246449 | orchestrator | 2025-09-18 00:47:17 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:17.246481 | orchestrator | 2025-09-18 00:47:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:20.287816 | orchestrator | 2025-09-18 00:47:20 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:20.289352 | orchestrator | 2025-09-18 00:47:20 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:20.290640 | orchestrator | 2025-09-18 00:47:20 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:20.290670 | orchestrator | 2025-09-18 00:47:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:23.329235 | orchestrator | 2025-09-18 00:47:23 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:23.331429 | orchestrator | 2025-09-18 00:47:23 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:23.332352 | orchestrator | 2025-09-18 00:47:23 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:23.332548 | orchestrator | 2025-09-18 00:47:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:26.372389 | orchestrator | 2025-09-18 00:47:26 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:26.374191 | orchestrator | 2025-09-18 00:47:26 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:26.374645 | orchestrator | 2025-09-18 00:47:26 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:26.374708 | orchestrator | 2025-09-18 00:47:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:29.416339 | orchestrator | 2025-09-18 00:47:29 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:29.419305 | orchestrator | 2025-09-18 00:47:29 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:29.422623 | orchestrator | 2025-09-18 00:47:29 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:29.423085 | orchestrator | 2025-09-18 00:47:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:32.467819 | orchestrator | 2025-09-18 00:47:32 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:32.469243 | orchestrator | 2025-09-18 00:47:32 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:32.471442 | orchestrator | 2025-09-18 00:47:32 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:32.471490 | orchestrator | 2025-09-18 00:47:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:35.518618 | orchestrator | 2025-09-18 00:47:35 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:35.520817 | orchestrator | 2025-09-18 00:47:35 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:35.522839 | orchestrator | 2025-09-18 00:47:35 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:35.523006 | orchestrator | 2025-09-18 00:47:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:38.565912 | orchestrator | 2025-09-18 00:47:38 | INFO  | Task 
f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:38.566429 | orchestrator | 2025-09-18 00:47:38 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:38.566488 | orchestrator | 2025-09-18 00:47:38 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:38.566506 | orchestrator | 2025-09-18 00:47:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:41.597339 | orchestrator | 2025-09-18 00:47:41 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:41.598937 | orchestrator | 2025-09-18 00:47:41 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:41.601322 | orchestrator | 2025-09-18 00:47:41 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:41.601347 | orchestrator | 2025-09-18 00:47:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:44.641308 | orchestrator | 2025-09-18 00:47:44 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:44.642829 | orchestrator | 2025-09-18 00:47:44 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:44.644605 | orchestrator | 2025-09-18 00:47:44 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:44.645554 | orchestrator | 2025-09-18 00:47:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:47.694330 | orchestrator | 2025-09-18 00:47:47 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:47.694479 | orchestrator | 2025-09-18 00:47:47 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:47.694819 | orchestrator | 2025-09-18 00:47:47 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:47.694962 | orchestrator | 2025-09-18 00:47:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:50.742726 | orchestrator | 2025-09-18 00:47:50 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:50.742814 | orchestrator | 2025-09-18 00:47:50 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:50.746408 | orchestrator | 2025-09-18 00:47:50 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state STARTED 2025-09-18 00:47:50.746448 | orchestrator | 2025-09-18 00:47:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:53.797287 | orchestrator | 2025-09-18 00:47:53 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:53.799765 | orchestrator | 2025-09-18 00:47:53 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:53.800760 | orchestrator | 2025-09-18 00:47:53 | INFO  | Task 629c1652-ccdb-43ad-9ebc-0526426ab5b6 is in state SUCCESS 2025-09-18 00:47:53.800794 | orchestrator | 2025-09-18 00:47:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:53.804762 | orchestrator | 2025-09-18 00:47:53.804797 | orchestrator | 2025-09-18 00:47:53.804809 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:47:53.804821 | orchestrator | 2025-09-18 00:47:53.804832 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:47:53.804844 | orchestrator | Thursday 18 September 2025 00:45:27 +0000 (0:00:00.263) 0:00:00.263 **** 2025-09-18 
00:47:53.804855 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.805014 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.805031 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.805042 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:47:53.805053 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:47:53.805064 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:47:53.805075 | orchestrator | 2025-09-18 00:47:53.805087 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:47:53.805098 | orchestrator | Thursday 18 September 2025 00:45:28 +0000 (0:00:00.636) 0:00:00.899 **** 2025-09-18 00:47:53.805109 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-18 00:47:53.805120 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-18 00:47:53.805131 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-18 00:47:53.805142 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-18 00:47:53.805153 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-18 00:47:53.805164 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-18 00:47:53.805175 | orchestrator | 2025-09-18 00:47:53.805492 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-18 00:47:53.805506 | orchestrator | 2025-09-18 00:47:53.805517 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-18 00:47:53.805529 | orchestrator | Thursday 18 September 2025 00:45:28 +0000 (0:00:00.712) 0:00:01.611 **** 2025-09-18 00:47:53.805558 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:47:53.805571 | orchestrator | 2025-09-18 00:47:53.805583 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-18 00:47:53.805594 | orchestrator | Thursday 18 September 2025 00:45:29 +0000 (0:00:01.003) 0:00:02.614 **** 2025-09-18 00:47:53.805608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 
00:47:53.805667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805700 | orchestrator | 2025-09-18 00:47:53.805723 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-18 00:47:53.805735 | orchestrator | Thursday 18 September 2025 00:45:31 +0000 (0:00:01.165) 0:00:03.780 **** 2025-09-18 00:47:53.805746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805901 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805937 | orchestrator | 2025-09-18 00:47:53.805948 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-18 00:47:53.805959 | orchestrator | Thursday 18 September 2025 00:45:33 +0000 (0:00:02.029) 0:00:05.809 **** 2025-09-18 00:47:53.805970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.805981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806096 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806109 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806136 | orchestrator | 2025-09-18 00:47:53.806150 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-18 00:47:53.806163 | orchestrator | Thursday 18 September 2025 00:45:34 +0000 (0:00:01.263) 0:00:07.072 **** 2025-09-18 00:47:53.806176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806256 | orchestrator | 2025-09-18 00:47:53.806277 | orchestrator | TASK [ovn-controller : Check ovn-controller 
containers] ************************ 2025-09-18 00:47:53.806290 | orchestrator | Thursday 18 September 2025 00:45:35 +0000 (0:00:01.418) 0:00:08.491 **** 2025-09-18 00:47:53.806303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.806417 | orchestrator | 2025-09-18 00:47:53.806429 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-18 00:47:53.806441 | orchestrator | Thursday 18 September 2025 00:45:37 +0000 (0:00:01.514) 0:00:10.005 **** 2025-09-18 00:47:53.806452 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.806464 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.806475 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:47:53.806485 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.806496 | orchestrator | changed: [testbed-node-4] 
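For context on the "Create br-int bridge on OpenvSwitch" results being reported here: the integration bridge is created idempotently against the local ovsdb on every chassis. A minimal Ansible sketch of an equivalent step, assuming plain ovs-vsctl is reachable inside a container named openvswitch_vswitchd (kolla-ansible's real role uses its own modules and change detection, so this is only an approximation):

  - name: Ensure the OVN integration bridge exists (illustrative approximation)
    ansible.builtin.command:
      cmd: docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-int
    become: true
    # --may-exist makes the command idempotent; the actual role additionally
    # detects whether the bridge was really created before reporting "changed".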
2025-09-18 00:47:53.806507 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:47:53.806518 | orchestrator | 2025-09-18 00:47:53.806528 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-18 00:47:53.806539 | orchestrator | Thursday 18 September 2025 00:45:40 +0000 (0:00:02.856) 0:00:12.862 **** 2025-09-18 00:47:53.806550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-18 00:47:53.806561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-18 00:47:53.806572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-18 00:47:53.806583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-18 00:47:53.806594 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-18 00:47:53.806604 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-18 00:47:53.806615 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-18 00:47:53.806626 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-18 00:47:53.806643 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-18 00:47:53.806654 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-18 00:47:53.806674 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-18 00:47:53.806685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-18 00:47:53.806696 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-18 00:47:53.806707 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-18 00:47:53.806718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-18 00:47:53.806729 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-18 00:47:53.806740 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-18 00:47:53.806751 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-18 00:47:53.806762 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-18 00:47:53.806778 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-18 00:47:53.806790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-18 00:47:53.806800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
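The "Configure OVN in OVSDB" items above write per-chassis settings into the external_ids column of the local Open_vSwitch record: the Geneve tunnel endpoint IP, the encapsulation type, the southbound DB endpoints on port 6642, and the probe intervals. A sketch of an equivalent loop, again assuming direct ovs-vsctl access inside the openvswitch_vswitchd container rather than the module kolla-ansible actually uses (the values shown are the ones logged for testbed-node-0):

  - name: Write OVN chassis settings into Open_vSwitch external_ids (illustrative)
    ansible.builtin.command: >-
      docker exec openvswitch_vswitchd
      ovs-vsctl set Open_vSwitch . external_ids:{{ item.name }}="{{ item.value }}"
    loop:
      - { name: ovn-encap-ip, value: "192.168.16.10" }
      - { name: ovn-encap-type, value: geneve }
      - { name: ovn-remote, value: "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" }
      - { name: ovn-remote-probe-interval, value: "60000" }
      - { name: ovn-openflow-probe-interval, value: "60" }
      - { name: ovn-monitor-all, value: "false" }
    become: true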
2025-09-18 00:47:53.806811 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-18 00:47:53.806822 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-18 00:47:53.806833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-18 00:47:53.806843 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-18 00:47:53.806854 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-18 00:47:53.806865 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-18 00:47:53.806876 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-18 00:47:53.806886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-18 00:47:53.806898 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-18 00:47:53.806909 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-18 00:47:53.806919 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-18 00:47:53.806930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-18 00:47:53.806941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-18 00:47:53.806952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-18 00:47:53.806962 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-18 00:47:53.806973 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-18 00:47:53.806984 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-18 00:47:53.807002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-18 00:47:53.807013 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-18 00:47:53.807024 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-18 00:47:53.807035 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-18 00:47:53.807046 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-18 00:47:53.807063 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-18 00:47:53.807074 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-18 00:47:53.807085 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-18 00:47:53.807096 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-18 00:47:53.807107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-18 00:47:53.807117 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-18 00:47:53.807129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-18 00:47:53.807140 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-18 00:47:53.807150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-18 00:47:53.807161 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-18 00:47:53.807172 | orchestrator | 2025-09-18 00:47:53.807183 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-18 00:47:53.807199 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:18.253) 0:00:31.116 **** 2025-09-18 00:47:53.807210 | orchestrator | 2025-09-18 00:47:53.807221 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-18 00:47:53.807232 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:00.237) 0:00:31.353 **** 2025-09-18 00:47:53.807243 | orchestrator | 2025-09-18 00:47:53.807254 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-18 00:47:53.807265 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:00.061) 0:00:31.415 **** 2025-09-18 00:47:53.807275 | orchestrator | 2025-09-18 00:47:53.807286 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-18 00:47:53.807297 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:00.064) 0:00:31.480 **** 2025-09-18 00:47:53.807308 | orchestrator | 2025-09-18 00:47:53.807319 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-18 00:47:53.807329 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:00.063) 0:00:31.543 **** 2025-09-18 00:47:53.807340 | orchestrator | 2025-09-18 00:47:53.807351 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-18 00:47:53.807362 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:00.063) 0:00:31.607 **** 2025-09-18 00:47:53.807391 | orchestrator | 2025-09-18 00:47:53.807402 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-18 00:47:53.807413 | orchestrator | Thursday 18 September 2025 00:45:59 +0000 (0:00:00.086) 0:00:31.694 **** 2025-09-18 00:47:53.807431 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:47:53.807442 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:47:53.807453 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.807464 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.807475 | orchestrator | ok: [testbed-node-1] 
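The mixed changed/ok results for ovn-bridge-mappings, ovn-chassis-mac-mappings and ovn-cms-options above reflect the chassis split in this testbed: testbed-node-0..2 act as gateway chassis (bridge mapping physnet1:br-ex plus enable-chassis-as-gw), while testbed-node-3..5 only receive a chassis MAC mapping and have the gateway settings removed. A sketch of how such per-host items could be assembled, assuming a boolean host var named ovn_gateway_node and a fact ovn_chassis_mac (both names are illustrative, not the variables kolla-ansible actually uses):

  - name: Build per-chassis external_ids items (illustrative)
    ansible.builtin.set_fact:
      ovn_external_ids:
        - name: ovn-bridge-mappings
          value: "physnet1:br-ex"
          state: "{{ 'present' if ovn_gateway_node | bool else 'absent' }}"
        - name: ovn-cms-options
          value: "enable-chassis-as-gw,availability-zones=nova"
          state: "{{ 'present' if ovn_gateway_node | bool else 'absent' }}"
        - name: ovn-chassis-mac-mappings
          value: "physnet1:{{ ovn_chassis_mac | default('52:54:00:00:00:00') }}"
          state: "{{ 'absent' if ovn_gateway_node | bool else 'present' }}"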
2025-09-18 00:47:53.807485 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:47:53.807496 | orchestrator | 2025-09-18 00:47:53.807507 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-18 00:47:53.807518 | orchestrator | Thursday 18 September 2025 00:46:00 +0000 (0:00:01.528) 0:00:33.223 **** 2025-09-18 00:47:53.807529 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.807539 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.807550 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.807561 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:47:53.807571 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:47:53.807582 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:47:53.807593 | orchestrator | 2025-09-18 00:47:53.807603 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-18 00:47:53.807614 | orchestrator | 2025-09-18 00:47:53.807625 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-18 00:47:53.807636 | orchestrator | Thursday 18 September 2025 00:46:28 +0000 (0:00:28.312) 0:01:01.536 **** 2025-09-18 00:47:53.807647 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:47:53.807658 | orchestrator | 2025-09-18 00:47:53.807668 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-18 00:47:53.807679 | orchestrator | Thursday 18 September 2025 00:46:29 +0000 (0:00:00.765) 0:01:02.302 **** 2025-09-18 00:47:53.807690 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:47:53.807701 | orchestrator | 2025-09-18 00:47:53.807712 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-18 00:47:53.807723 | orchestrator | Thursday 18 September 2025 00:46:30 +0000 (0:00:00.626) 0:01:02.928 **** 2025-09-18 00:47:53.807734 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.807744 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.807755 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.807766 | orchestrator | 2025-09-18 00:47:53.807777 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-18 00:47:53.807787 | orchestrator | Thursday 18 September 2025 00:46:31 +0000 (0:00:01.070) 0:01:03.998 **** 2025-09-18 00:47:53.807798 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.807809 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.807820 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.807836 | orchestrator | 2025-09-18 00:47:53.807847 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-18 00:47:53.807858 | orchestrator | Thursday 18 September 2025 00:46:31 +0000 (0:00:00.353) 0:01:04.352 **** 2025-09-18 00:47:53.807869 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.807879 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.807890 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.807901 | orchestrator | 2025-09-18 00:47:53.807912 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-18 00:47:53.807923 | orchestrator | Thursday 18 September 2025 00:46:32 +0000 (0:00:00.362) 0:01:04.715 **** 2025-09-18 
00:47:53.807933 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.807944 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.807955 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.807965 | orchestrator | 2025-09-18 00:47:53.807976 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-18 00:47:53.807987 | orchestrator | Thursday 18 September 2025 00:46:32 +0000 (0:00:00.433) 0:01:05.148 **** 2025-09-18 00:47:53.807998 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.808009 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.808025 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.808036 | orchestrator | 2025-09-18 00:47:53.808047 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-18 00:47:53.808058 | orchestrator | Thursday 18 September 2025 00:46:33 +0000 (0:00:00.687) 0:01:05.835 **** 2025-09-18 00:47:53.808069 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808080 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808091 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808101 | orchestrator | 2025-09-18 00:47:53.808112 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-18 00:47:53.808123 | orchestrator | Thursday 18 September 2025 00:46:33 +0000 (0:00:00.340) 0:01:06.176 **** 2025-09-18 00:47:53.808134 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808145 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808155 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808166 | orchestrator | 2025-09-18 00:47:53.808182 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-18 00:47:53.808193 | orchestrator | Thursday 18 September 2025 00:46:33 +0000 (0:00:00.344) 0:01:06.521 **** 2025-09-18 00:47:53.808204 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808215 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808226 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808237 | orchestrator | 2025-09-18 00:47:53.808247 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-18 00:47:53.808258 | orchestrator | Thursday 18 September 2025 00:46:34 +0000 (0:00:00.290) 0:01:06.812 **** 2025-09-18 00:47:53.808269 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808280 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808290 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808301 | orchestrator | 2025-09-18 00:47:53.808312 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-18 00:47:53.808323 | orchestrator | Thursday 18 September 2025 00:46:34 +0000 (0:00:00.392) 0:01:07.204 **** 2025-09-18 00:47:53.808334 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808344 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808355 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808365 | orchestrator | 2025-09-18 00:47:53.808439 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-18 00:47:53.808453 | orchestrator | Thursday 18 September 2025 00:46:34 +0000 (0:00:00.264) 0:01:07.468 **** 2025-09-18 00:47:53.808464 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808474 | 
orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808485 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808495 | orchestrator | 2025-09-18 00:47:53.808506 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-18 00:47:53.808517 | orchestrator | Thursday 18 September 2025 00:46:35 +0000 (0:00:00.267) 0:01:07.736 **** 2025-09-18 00:47:53.808528 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808538 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808549 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808559 | orchestrator | 2025-09-18 00:47:53.808568 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-18 00:47:53.808578 | orchestrator | Thursday 18 September 2025 00:46:35 +0000 (0:00:00.283) 0:01:08.020 **** 2025-09-18 00:47:53.808587 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808597 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808606 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808616 | orchestrator | 2025-09-18 00:47:53.808625 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-18 00:47:53.808635 | orchestrator | Thursday 18 September 2025 00:46:35 +0000 (0:00:00.248) 0:01:08.268 **** 2025-09-18 00:47:53.808644 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808654 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808670 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808679 | orchestrator | 2025-09-18 00:47:53.808689 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-18 00:47:53.808699 | orchestrator | Thursday 18 September 2025 00:46:36 +0000 (0:00:00.416) 0:01:08.685 **** 2025-09-18 00:47:53.808708 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808718 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808727 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808737 | orchestrator | 2025-09-18 00:47:53.808746 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-18 00:47:53.808756 | orchestrator | Thursday 18 September 2025 00:46:36 +0000 (0:00:00.258) 0:01:08.944 **** 2025-09-18 00:47:53.808765 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808775 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808784 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808793 | orchestrator | 2025-09-18 00:47:53.808803 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-18 00:47:53.808812 | orchestrator | Thursday 18 September 2025 00:46:36 +0000 (0:00:00.256) 0:01:09.200 **** 2025-09-18 00:47:53.808822 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.808832 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.808847 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.808856 | orchestrator | 2025-09-18 00:47:53.808866 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-18 00:47:53.808876 | orchestrator | Thursday 18 September 2025 00:46:36 +0000 (0:00:00.269) 0:01:09.470 **** 2025-09-18 00:47:53.808885 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 
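All of the lookup checks above were skipped because no existing OVN DB container volumes were found, so lookup_cluster.yml classified every controller as a fresh member and bootstrap-initial.yml is included next to start a brand-new NB/SB Raft cluster on testbed-node-0..2. Once the containers are running, their Raft state can be inspected read-only with ovn-appctl; a small sketch of such a check as an Ansible task, assuming the kolla container names ovn_nb_db/ovn_sb_db and the default control socket paths:

  - name: Show OVN Raft cluster status (illustrative read-only check)
    ansible.builtin.command: >-
      docker exec {{ item.container }}
      ovn-appctl -t {{ item.ctl }} cluster/status {{ item.schema }}
    loop:
      - { container: ovn_nb_db, ctl: /var/run/ovn/ovnnb_db.ctl, schema: OVN_Northbound }
      - { container: ovn_sb_db, ctl: /var/run/ovn/ovnsb_db.ctl, schema: OVN_Southbound }
    register: ovn_cluster_status
    changed_when: false
    become: true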
2025-09-18 00:47:53.808895 | orchestrator | 2025-09-18 00:47:53.808905 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-18 00:47:53.808914 | orchestrator | Thursday 18 September 2025 00:46:37 +0000 (0:00:00.633) 0:01:10.104 **** 2025-09-18 00:47:53.808924 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.808933 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.808943 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.808952 | orchestrator | 2025-09-18 00:47:53.808963 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-18 00:47:53.808980 | orchestrator | Thursday 18 September 2025 00:46:37 +0000 (0:00:00.371) 0:01:10.475 **** 2025-09-18 00:47:53.808997 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.809012 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.809028 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.809043 | orchestrator | 2025-09-18 00:47:53.809058 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-18 00:47:53.809074 | orchestrator | Thursday 18 September 2025 00:46:38 +0000 (0:00:00.386) 0:01:10.861 **** 2025-09-18 00:47:53.809090 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.809105 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.809122 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.809133 | orchestrator | 2025-09-18 00:47:53.809142 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-18 00:47:53.809152 | orchestrator | Thursday 18 September 2025 00:46:38 +0000 (0:00:00.455) 0:01:11.317 **** 2025-09-18 00:47:53.809161 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.809171 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.809186 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.809196 | orchestrator | 2025-09-18 00:47:53.809206 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-18 00:47:53.809216 | orchestrator | Thursday 18 September 2025 00:46:39 +0000 (0:00:00.373) 0:01:11.691 **** 2025-09-18 00:47:53.809225 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.809235 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.809244 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.809254 | orchestrator | 2025-09-18 00:47:53.809263 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-18 00:47:53.809284 | orchestrator | Thursday 18 September 2025 00:46:39 +0000 (0:00:00.430) 0:01:12.122 **** 2025-09-18 00:47:53.809294 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.809303 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.809313 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.809322 | orchestrator | 2025-09-18 00:47:53.809332 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-18 00:47:53.809341 | orchestrator | Thursday 18 September 2025 00:46:39 +0000 (0:00:00.394) 0:01:12.516 **** 2025-09-18 00:47:53.809351 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.809360 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.809369 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.809398 | orchestrator | 2025-09-18 00:47:53.809408 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-18 00:47:53.809418 | orchestrator | Thursday 18 September 2025 00:46:40 +0000 (0:00:00.807) 0:01:13.324 **** 2025-09-18 00:47:53.809428 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.809437 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.809447 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.809456 | orchestrator | 2025-09-18 00:47:53.809466 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-18 00:47:53.809476 | orchestrator | Thursday 18 September 2025 00:46:41 +0000 (0:00:00.400) 0:01:13.724 **** 2025-09-18 00:47:53.809486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809600 | orchestrator | 2025-09-18 00:47:53.809610 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-18 00:47:53.809620 | orchestrator | Thursday 18 September 2025 00:46:42 +0000 (0:00:01.528) 0:01:15.253 **** 2025-09-18 00:47:53.809630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809736 | orchestrator | 2025-09-18 00:47:53.809746 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-18 00:47:53.809756 | orchestrator | Thursday 18 September 2025 00:46:46 +0000 (0:00:04.055) 0:01:19.309 **** 2025-09-18 00:47:53.809766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809806 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.809868 | orchestrator | 2025-09-18 00:47:53.809882 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 00:47:53.809892 | orchestrator | Thursday 18 September 2025 00:46:48 +0000 (0:00:02.275) 0:01:21.584 **** 2025-09-18 00:47:53.809902 | orchestrator | 2025-09-18 00:47:53.809912 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 00:47:53.809921 | orchestrator | Thursday 18 September 2025 00:46:49 +0000 (0:00:00.109) 0:01:21.694 **** 2025-09-18 00:47:53.809931 | orchestrator | 2025-09-18 00:47:53.809941 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 00:47:53.809950 | orchestrator | Thursday 18 September 2025 00:46:49 +0000 (0:00:00.063) 0:01:21.757 **** 2025-09-18 00:47:53.809960 | orchestrator | 2025-09-18 00:47:53.809969 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-18 00:47:53.809979 | orchestrator | Thursday 18 September 2025 00:46:49 +0000 (0:00:00.070) 0:01:21.828 **** 2025-09-18 00:47:53.809989 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.809998 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.810008 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.810045 | orchestrator | 2025-09-18 00:47:53.810057 
| orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-18 00:47:53.810067 | orchestrator | Thursday 18 September 2025 00:46:56 +0000 (0:00:07.557) 0:01:29.385 **** 2025-09-18 00:47:53.810077 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.810086 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.810096 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.810105 | orchestrator | 2025-09-18 00:47:53.810115 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-18 00:47:53.810125 | orchestrator | Thursday 18 September 2025 00:47:04 +0000 (0:00:07.436) 0:01:36.822 **** 2025-09-18 00:47:53.810134 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.810144 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.810154 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.810163 | orchestrator | 2025-09-18 00:47:53.810173 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-18 00:47:53.810183 | orchestrator | Thursday 18 September 2025 00:47:11 +0000 (0:00:07.599) 0:01:44.422 **** 2025-09-18 00:47:53.810192 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.810202 | orchestrator | 2025-09-18 00:47:53.810211 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-18 00:47:53.810221 | orchestrator | Thursday 18 September 2025 00:47:12 +0000 (0:00:00.384) 0:01:44.806 **** 2025-09-18 00:47:53.810231 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.810241 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.810250 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.810260 | orchestrator | 2025-09-18 00:47:53.810270 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-18 00:47:53.810285 | orchestrator | Thursday 18 September 2025 00:47:13 +0000 (0:00:00.900) 0:01:45.707 **** 2025-09-18 00:47:53.810295 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.810305 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.810314 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.810324 | orchestrator | 2025-09-18 00:47:53.810334 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-18 00:47:53.810344 | orchestrator | Thursday 18 September 2025 00:47:13 +0000 (0:00:00.717) 0:01:46.424 **** 2025-09-18 00:47:53.810353 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.810363 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.810387 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.810397 | orchestrator | 2025-09-18 00:47:53.810407 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-18 00:47:53.810417 | orchestrator | Thursday 18 September 2025 00:47:14 +0000 (0:00:00.794) 0:01:47.219 **** 2025-09-18 00:47:53.810426 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.810436 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.810446 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.810455 | orchestrator | 2025-09-18 00:47:53.810465 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-18 00:47:53.810475 | orchestrator | Thursday 18 September 2025 00:47:15 +0000 (0:00:00.662) 0:01:47.881 **** 2025-09-18 00:47:53.810485 | 
orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.810494 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.810510 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.810520 | orchestrator | 2025-09-18 00:47:53.810529 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-18 00:47:53.810539 | orchestrator | Thursday 18 September 2025 00:47:16 +0000 (0:00:01.158) 0:01:49.039 **** 2025-09-18 00:47:53.810549 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.810558 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.810568 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.810578 | orchestrator | 2025-09-18 00:47:53.810587 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-18 00:47:53.810597 | orchestrator | Thursday 18 September 2025 00:47:17 +0000 (0:00:00.887) 0:01:49.926 **** 2025-09-18 00:47:53.810607 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.810616 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.810626 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.810635 | orchestrator | 2025-09-18 00:47:53.810645 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-18 00:47:53.810655 | orchestrator | Thursday 18 September 2025 00:47:17 +0000 (0:00:00.286) 0:01:50.213 **** 2025-09-18 00:47:53.810665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810679 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810689 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810700 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810726 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810736 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810746 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810762 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810772 | orchestrator | 2025-09-18 00:47:53.810781 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-18 00:47:53.810791 | orchestrator | Thursday 18 September 2025 00:47:19 +0000 (0:00:01.491) 0:01:51.704 **** 2025-09-18 00:47:53.810801 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810811 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810821 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 
00:47:53.810852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810901 | orchestrator | 2025-09-18 00:47:53.810911 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-18 00:47:53.810921 | orchestrator | Thursday 18 September 2025 00:47:23 +0000 (0:00:04.178) 0:01:55.882 **** 2025-09-18 00:47:53.810936 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810946 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810956 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810986 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.810996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.811006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.811016 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.811026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:47:53.811036 | orchestrator | 2025-09-18 00:47:53.811046 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 00:47:53.811056 | orchestrator | Thursday 18 September 2025 00:47:26 +0000 (0:00:03.355) 0:01:59.237 **** 2025-09-18 00:47:53.811065 | orchestrator | 2025-09-18 00:47:53.811075 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 00:47:53.811085 | orchestrator | Thursday 18 September 2025 00:47:26 +0000 (0:00:00.068) 0:01:59.306 **** 2025-09-18 00:47:53.811094 | orchestrator | 2025-09-18 00:47:53.811104 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 00:47:53.811113 | 
orchestrator | Thursday 18 September 2025 00:47:26 +0000 (0:00:00.075) 0:01:59.381 **** 2025-09-18 00:47:53.811123 | orchestrator | 2025-09-18 00:47:53.811133 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-18 00:47:53.811142 | orchestrator | Thursday 18 September 2025 00:47:26 +0000 (0:00:00.067) 0:01:59.449 **** 2025-09-18 00:47:53.811152 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.811162 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.811172 | orchestrator | 2025-09-18 00:47:53.811186 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-18 00:47:53.811196 | orchestrator | Thursday 18 September 2025 00:47:33 +0000 (0:00:06.272) 0:02:05.722 **** 2025-09-18 00:47:53.811206 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.811215 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.811225 | orchestrator | 2025-09-18 00:47:53.811235 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-18 00:47:53.811244 | orchestrator | Thursday 18 September 2025 00:47:39 +0000 (0:00:06.240) 0:02:11.962 **** 2025-09-18 00:47:53.811254 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:47:53.811269 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:47:53.811279 | orchestrator | 2025-09-18 00:47:53.811288 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-18 00:47:53.811298 | orchestrator | Thursday 18 September 2025 00:47:46 +0000 (0:00:06.790) 0:02:18.753 **** 2025-09-18 00:47:53.811308 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:47:53.811317 | orchestrator | 2025-09-18 00:47:53.811327 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-18 00:47:53.811336 | orchestrator | Thursday 18 September 2025 00:47:46 +0000 (0:00:00.151) 0:02:18.904 **** 2025-09-18 00:47:53.811346 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.811356 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.811365 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.811398 | orchestrator | 2025-09-18 00:47:53.811409 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-18 00:47:53.811419 | orchestrator | Thursday 18 September 2025 00:47:47 +0000 (0:00:00.762) 0:02:19.667 **** 2025-09-18 00:47:53.811428 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:47:53.811438 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.811447 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.811457 | orchestrator | 2025-09-18 00:47:53.811466 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-18 00:47:53.811476 | orchestrator | Thursday 18 September 2025 00:47:47 +0000 (0:00:00.567) 0:02:20.235 **** 2025-09-18 00:47:53.811493 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.811503 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.811513 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.811522 | orchestrator | 2025-09-18 00:47:53.811532 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-18 00:47:53.811541 | orchestrator | Thursday 18 September 2025 00:47:48 +0000 (0:00:00.764) 0:02:21.000 **** 2025-09-18 00:47:53.811551 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
00:47:53.811560 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:47:53.811570 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:47:53.811579 | orchestrator | 2025-09-18 00:47:53.811589 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-18 00:47:53.811599 | orchestrator | Thursday 18 September 2025 00:47:48 +0000 (0:00:00.573) 0:02:21.574 **** 2025-09-18 00:47:53.811608 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.811618 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.811627 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.811637 | orchestrator | 2025-09-18 00:47:53.811646 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-18 00:47:53.811656 | orchestrator | Thursday 18 September 2025 00:47:49 +0000 (0:00:00.781) 0:02:22.355 **** 2025-09-18 00:47:53.811666 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:47:53.811675 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:47:53.811685 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:47:53.811694 | orchestrator | 2025-09-18 00:47:53.811704 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:47:53.811713 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-18 00:47:53.811724 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-18 00:47:53.811733 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-18 00:47:53.811743 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:47:53.811753 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:47:53.811769 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:47:53.811778 | orchestrator | 2025-09-18 00:47:53.811788 | orchestrator | 2025-09-18 00:47:53.811797 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:47:53.811807 | orchestrator | Thursday 18 September 2025 00:47:50 +0000 (0:00:00.889) 0:02:23.244 **** 2025-09-18 00:47:53.811817 | orchestrator | =============================================================================== 2025-09-18 00:47:53.811826 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.31s 2025-09-18 00:47:53.811836 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.25s 2025-09-18 00:47:53.811846 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.39s 2025-09-18 00:47:53.811855 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.83s 2025-09-18 00:47:53.811865 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.68s 2025-09-18 00:47:53.811875 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.18s 2025-09-18 00:47:53.811884 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.06s 2025-09-18 00:47:53.811899 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.36s 2025-09-18 00:47:53.811909 | orchestrator | 
ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.86s 2025-09-18 00:47:53.811919 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.28s 2025-09-18 00:47:53.811928 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.03s 2025-09-18 00:47:53.811938 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.53s 2025-09-18 00:47:53.811947 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2025-09-18 00:47:53.811957 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.51s 2025-09-18 00:47:53.811967 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s 2025-09-18 00:47:53.811976 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.42s 2025-09-18 00:47:53.811986 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.26s 2025-09-18 00:47:53.811996 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.17s 2025-09-18 00:47:53.812006 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.16s 2025-09-18 00:47:53.812016 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.07s 2025-09-18 00:47:56.848851 | orchestrator | 2025-09-18 00:47:56 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:56.849711 | orchestrator | 2025-09-18 00:47:56 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:56.850320 | orchestrator | 2025-09-18 00:47:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:47:59.907866 | orchestrator | 2025-09-18 00:47:59 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:47:59.907975 | orchestrator | 2025-09-18 00:47:59 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:47:59.908461 | orchestrator | 2025-09-18 00:47:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:02.956495 | orchestrator | 2025-09-18 00:48:02 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:02.960597 | orchestrator | 2025-09-18 00:48:02 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:02.960754 | orchestrator | 2025-09-18 00:48:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:06.016319 | orchestrator | 2025-09-18 00:48:06 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:06.024262 | orchestrator | 2025-09-18 00:48:06 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:06.024302 | orchestrator | 2025-09-18 00:48:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:09.076641 | orchestrator | 2025-09-18 00:48:09 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:09.077452 | orchestrator | 2025-09-18 00:48:09 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:09.077480 | orchestrator | 2025-09-18 00:48:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:12.114837 | orchestrator | 2025-09-18 00:48:12 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:12.115609 | orchestrator | 2025-09-18 
00:48:12 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:12.115645 | orchestrator | 2025-09-18 00:48:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:15.165792 | orchestrator | 2025-09-18 00:48:15 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:15.169141 | orchestrator | 2025-09-18 00:48:15 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:15.169174 | orchestrator | 2025-09-18 00:48:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:18.216496 | orchestrator | 2025-09-18 00:48:18 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:18.219286 | orchestrator | 2025-09-18 00:48:18 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:18.219329 | orchestrator | 2025-09-18 00:48:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:21.294787 | orchestrator | 2025-09-18 00:48:21 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:21.296041 | orchestrator | 2025-09-18 00:48:21 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:21.296411 | orchestrator | 2025-09-18 00:48:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:24.336640 | orchestrator | 2025-09-18 00:48:24 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:24.338524 | orchestrator | 2025-09-18 00:48:24 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:24.339303 | orchestrator | 2025-09-18 00:48:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:27.374888 | orchestrator | 2025-09-18 00:48:27 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:27.376545 | orchestrator | 2025-09-18 00:48:27 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:27.377542 | orchestrator | 2025-09-18 00:48:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:30.415620 | orchestrator | 2025-09-18 00:48:30 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:30.415724 | orchestrator | 2025-09-18 00:48:30 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:30.415740 | orchestrator | 2025-09-18 00:48:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:33.449282 | orchestrator | 2025-09-18 00:48:33 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:33.450106 | orchestrator | 2025-09-18 00:48:33 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:33.451635 | orchestrator | 2025-09-18 00:48:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:36.483778 | orchestrator | 2025-09-18 00:48:36 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:36.484105 | orchestrator | 2025-09-18 00:48:36 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:36.484134 | orchestrator | 2025-09-18 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:39.528109 | orchestrator | 2025-09-18 00:48:39 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:39.529669 | orchestrator | 2025-09-18 00:48:39 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 
00:48:39.529702 | orchestrator | 2025-09-18 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:42.563878 | orchestrator | 2025-09-18 00:48:42 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:42.566312 | orchestrator | 2025-09-18 00:48:42 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:42.566528 | orchestrator | 2025-09-18 00:48:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:45.613890 | orchestrator | 2025-09-18 00:48:45 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:45.614535 | orchestrator | 2025-09-18 00:48:45 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:45.614575 | orchestrator | 2025-09-18 00:48:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:48.649630 | orchestrator | 2025-09-18 00:48:48 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:48.650071 | orchestrator | 2025-09-18 00:48:48 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:48.650201 | orchestrator | 2025-09-18 00:48:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:51.691062 | orchestrator | 2025-09-18 00:48:51 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:51.693148 | orchestrator | 2025-09-18 00:48:51 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:51.693220 | orchestrator | 2025-09-18 00:48:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:54.744530 | orchestrator | 2025-09-18 00:48:54 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:54.744886 | orchestrator | 2025-09-18 00:48:54 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:54.744910 | orchestrator | 2025-09-18 00:48:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:48:57.791068 | orchestrator | 2025-09-18 00:48:57 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:48:57.791717 | orchestrator | 2025-09-18 00:48:57 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:48:57.792061 | orchestrator | 2025-09-18 00:48:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:00.837139 | orchestrator | 2025-09-18 00:49:00 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:00.837245 | orchestrator | 2025-09-18 00:49:00 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:00.837260 | orchestrator | 2025-09-18 00:49:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:03.883244 | orchestrator | 2025-09-18 00:49:03 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:03.883678 | orchestrator | 2025-09-18 00:49:03 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:03.883741 | orchestrator | 2025-09-18 00:49:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:06.924748 | orchestrator | 2025-09-18 00:49:06 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:06.925126 | orchestrator | 2025-09-18 00:49:06 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:06.925157 | orchestrator | 2025-09-18 00:49:06 | INFO  | Wait 1 second(s) until the next 
check 2025-09-18 00:49:09.972520 | orchestrator | 2025-09-18 00:49:09 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:09.974374 | orchestrator | 2025-09-18 00:49:09 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:09.974508 | orchestrator | 2025-09-18 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:13.018415 | orchestrator | 2025-09-18 00:49:13 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:13.018593 | orchestrator | 2025-09-18 00:49:13 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:13.018627 | orchestrator | 2025-09-18 00:49:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:16.060673 | orchestrator | 2025-09-18 00:49:16 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:16.062509 | orchestrator | 2025-09-18 00:49:16 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:16.062639 | orchestrator | 2025-09-18 00:49:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:19.116236 | orchestrator | 2025-09-18 00:49:19 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:19.117569 | orchestrator | 2025-09-18 00:49:19 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:19.117787 | orchestrator | 2025-09-18 00:49:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:22.167919 | orchestrator | 2025-09-18 00:49:22 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:22.170648 | orchestrator | 2025-09-18 00:49:22 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:22.170765 | orchestrator | 2025-09-18 00:49:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:25.212643 | orchestrator | 2025-09-18 00:49:25 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:25.214902 | orchestrator | 2025-09-18 00:49:25 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:25.214970 | orchestrator | 2025-09-18 00:49:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:28.263909 | orchestrator | 2025-09-18 00:49:28 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:28.265353 | orchestrator | 2025-09-18 00:49:28 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:28.265388 | orchestrator | 2025-09-18 00:49:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:31.314093 | orchestrator | 2025-09-18 00:49:31 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:31.315637 | orchestrator | 2025-09-18 00:49:31 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:31.315937 | orchestrator | 2025-09-18 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:34.352912 | orchestrator | 2025-09-18 00:49:34 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:34.353051 | orchestrator | 2025-09-18 00:49:34 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:34.353067 | orchestrator | 2025-09-18 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:37.392074 | orchestrator | 2025-09-18 00:49:37 | INFO  | Task 
f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:37.393860 | orchestrator | 2025-09-18 00:49:37 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:37.393899 | orchestrator | 2025-09-18 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:40.440684 | orchestrator | 2025-09-18 00:49:40 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:40.441022 | orchestrator | 2025-09-18 00:49:40 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:40.441707 | orchestrator | 2025-09-18 00:49:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:43.482014 | orchestrator | 2025-09-18 00:49:43 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:43.483684 | orchestrator | 2025-09-18 00:49:43 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:43.483910 | orchestrator | 2025-09-18 00:49:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:46.528261 | orchestrator | 2025-09-18 00:49:46 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:46.528580 | orchestrator | 2025-09-18 00:49:46 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:46.528604 | orchestrator | 2025-09-18 00:49:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:49.587494 | orchestrator | 2025-09-18 00:49:49 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:49.588526 | orchestrator | 2025-09-18 00:49:49 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:49.588566 | orchestrator | 2025-09-18 00:49:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:52.629626 | orchestrator | 2025-09-18 00:49:52 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:52.636711 | orchestrator | 2025-09-18 00:49:52 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:52.636770 | orchestrator | 2025-09-18 00:49:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:55.674276 | orchestrator | 2025-09-18 00:49:55 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:55.675938 | orchestrator | 2025-09-18 00:49:55 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:55.675965 | orchestrator | 2025-09-18 00:49:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:49:58.720470 | orchestrator | 2025-09-18 00:49:58 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:49:58.722006 | orchestrator | 2025-09-18 00:49:58 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:49:58.722412 | orchestrator | 2025-09-18 00:49:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:01.767147 | orchestrator | 2025-09-18 00:50:01 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:01.767475 | orchestrator | 2025-09-18 00:50:01 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:01.768046 | orchestrator | 2025-09-18 00:50:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:04.808958 | orchestrator | 2025-09-18 00:50:04 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:04.810495 | orchestrator 
| 2025-09-18 00:50:04 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:04.810529 | orchestrator | 2025-09-18 00:50:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:07.846181 | orchestrator | 2025-09-18 00:50:07 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:07.847900 | orchestrator | 2025-09-18 00:50:07 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:07.848375 | orchestrator | 2025-09-18 00:50:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:10.898834 | orchestrator | 2025-09-18 00:50:10 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:10.900156 | orchestrator | 2025-09-18 00:50:10 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:10.900181 | orchestrator | 2025-09-18 00:50:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:13.939117 | orchestrator | 2025-09-18 00:50:13 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:13.940541 | orchestrator | 2025-09-18 00:50:13 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:13.941463 | orchestrator | 2025-09-18 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:16.982767 | orchestrator | 2025-09-18 00:50:16 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:16.982873 | orchestrator | 2025-09-18 00:50:16 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:16.982888 | orchestrator | 2025-09-18 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:20.042105 | orchestrator | 2025-09-18 00:50:20 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:20.043012 | orchestrator | 2025-09-18 00:50:20 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:20.043389 | orchestrator | 2025-09-18 00:50:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:23.082159 | orchestrator | 2025-09-18 00:50:23 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:23.083560 | orchestrator | 2025-09-18 00:50:23 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:23.083632 | orchestrator | 2025-09-18 00:50:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:26.120429 | orchestrator | 2025-09-18 00:50:26 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:26.122197 | orchestrator | 2025-09-18 00:50:26 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:26.122362 | orchestrator | 2025-09-18 00:50:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:29.157549 | orchestrator | 2025-09-18 00:50:29 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:29.158002 | orchestrator | 2025-09-18 00:50:29 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:29.160134 | orchestrator | 2025-09-18 00:50:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:32.201704 | orchestrator | 2025-09-18 00:50:32 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:32.202107 | orchestrator | 2025-09-18 00:50:32 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 
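Note: the long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" entries above is the deploy wrapper polling its two background task IDs once per second until they report SUCCESS (as task e2df5647-8a70-4540-ab1f-3f6b867c3b79 does a little further down). A minimal Python sketch of that polling pattern follows; the get_task_state helper and the printed wording are illustrative assumptions, not the actual OSISM client API.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states until every task has left the STARTED state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

The wrapper keeps polling the remaining task (f9efe551-4669-434b-badc-bed1065901bd) in the same way, which is why the per-second status lines continue after the first task finishes.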
2025-09-18 00:50:32.202323 | orchestrator | 2025-09-18 00:50:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:35.254387 | orchestrator | 2025-09-18 00:50:35 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:35.256223 | orchestrator | 2025-09-18 00:50:35 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:35.256254 | orchestrator | 2025-09-18 00:50:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:38.301466 | orchestrator | 2025-09-18 00:50:38 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:38.302585 | orchestrator | 2025-09-18 00:50:38 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:38.302618 | orchestrator | 2025-09-18 00:50:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:41.346343 | orchestrator | 2025-09-18 00:50:41 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:41.347478 | orchestrator | 2025-09-18 00:50:41 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state STARTED 2025-09-18 00:50:41.348060 | orchestrator | 2025-09-18 00:50:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:44.392687 | orchestrator | 2025-09-18 00:50:44 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:44.409091 | orchestrator | 2025-09-18 00:50:44 | INFO  | Task e2df5647-8a70-4540-ab1f-3f6b867c3b79 is in state SUCCESS 2025-09-18 00:50:44.412944 | orchestrator | 2025-09-18 00:50:44.413064 | orchestrator | 2025-09-18 00:50:44.413118 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:50:44.413134 | orchestrator | 2025-09-18 00:50:44.413154 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:50:44.413327 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.719) 0:00:00.719 **** 2025-09-18 00:50:44.413350 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.413372 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.413392 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.413412 | orchestrator | 2025-09-18 00:50:44.413432 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:50:44.413452 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.526) 0:00:01.246 **** 2025-09-18 00:50:44.413472 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-18 00:50:44.413492 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-18 00:50:44.413511 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-18 00:50:44.413531 | orchestrator | 2025-09-18 00:50:44.413544 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-18 00:50:44.413556 | orchestrator | 2025-09-18 00:50:44.413570 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-18 00:50:44.413583 | orchestrator | Thursday 18 September 2025 00:44:21 +0000 (0:00:00.977) 0:00:02.223 **** 2025-09-18 00:50:44.413596 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.413609 | orchestrator | 2025-09-18 00:50:44.413622 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
*************************************** 2025-09-18 00:50:44.413635 | orchestrator | Thursday 18 September 2025 00:44:22 +0000 (0:00:01.000) 0:00:03.223 **** 2025-09-18 00:50:44.413648 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.413660 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.413672 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.413692 | orchestrator | 2025-09-18 00:50:44.413714 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-18 00:50:44.413735 | orchestrator | Thursday 18 September 2025 00:44:23 +0000 (0:00:00.887) 0:00:04.111 **** 2025-09-18 00:50:44.413872 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.413888 | orchestrator | 2025-09-18 00:50:44.413902 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-18 00:50:44.413922 | orchestrator | Thursday 18 September 2025 00:44:24 +0000 (0:00:01.530) 0:00:05.641 **** 2025-09-18 00:50:44.413942 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.413960 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.413979 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.413999 | orchestrator | 2025-09-18 00:50:44.414014 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-18 00:50:44.414105 | orchestrator | Thursday 18 September 2025 00:44:25 +0000 (0:00:00.773) 0:00:06.415 **** 2025-09-18 00:50:44.414123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-18 00:50:44.414154 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-18 00:50:44.414165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-18 00:50:44.414237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-18 00:50:44.414249 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-18 00:50:44.414259 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-18 00:50:44.414270 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-18 00:50:44.414400 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-18 00:50:44.414424 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-18 00:50:44.414444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-18 00:50:44.414464 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-18 00:50:44.414484 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-18 00:50:44.414503 | orchestrator | 2025-09-18 00:50:44.414520 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-18 00:50:44.414540 | orchestrator | Thursday 18 September 2025 00:44:30 +0000 (0:00:04.447) 0:00:10.862 **** 2025-09-18 00:50:44.414559 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-18 00:50:44.414579 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-18 
00:50:44.414599 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-18 00:50:44.414617 | orchestrator | 2025-09-18 00:50:44.414637 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-18 00:50:44.414656 | orchestrator | Thursday 18 September 2025 00:44:30 +0000 (0:00:00.751) 0:00:11.613 **** 2025-09-18 00:50:44.414676 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-18 00:50:44.414694 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-18 00:50:44.414709 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-18 00:50:44.414720 | orchestrator | 2025-09-18 00:50:44.414731 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-18 00:50:44.414798 | orchestrator | Thursday 18 September 2025 00:44:32 +0000 (0:00:01.917) 0:00:13.530 **** 2025-09-18 00:50:44.414821 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-18 00:50:44.415059 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.415100 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-18 00:50:44.415121 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.415139 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-18 00:50:44.415159 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.415177 | orchestrator | 2025-09-18 00:50:44.415196 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-18 00:50:44.415235 | orchestrator | Thursday 18 September 2025 00:44:33 +0000 (0:00:00.609) 0:00:14.140 **** 2025-09-18 00:50:44.415259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.415312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.415346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.415365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.415377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.415389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.415414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.415436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.415448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.415459 | orchestrator | 2025-09-18 00:50:44.415470 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-18 00:50:44.415482 | orchestrator | Thursday 18 September 2025 00:44:35 +0000 (0:00:02.309) 0:00:16.449 **** 2025-09-18 00:50:44.415493 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.415634 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.415647 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.415658 | orchestrator | 2025-09-18 00:50:44.415670 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-18 00:50:44.415690 | orchestrator | Thursday 18 September 2025 00:44:37 +0000 (0:00:01.386) 0:00:17.836 **** 2025-09-18 00:50:44.415708 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-18 00:50:44.415728 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-18 00:50:44.415757 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-18 00:50:44.415778 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-18 00:50:44.415797 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-18 00:50:44.415815 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-18 00:50:44.415826 | orchestrator | 2025-09-18 00:50:44.415837 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-18 00:50:44.415848 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:01.679) 0:00:19.515 **** 2025-09-18 00:50:44.415859 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.415869 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.415880 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.415891 | orchestrator | 2025-09-18 00:50:44.415902 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-18 00:50:44.415912 | orchestrator | Thursday 18 September 2025 00:44:41 +0000 (0:00:02.510) 0:00:22.026 **** 2025-09-18 00:50:44.415923 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.415934 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.415945 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.415955 | orchestrator | 2025-09-18 00:50:44.415966 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-18 00:50:44.415977 | orchestrator | Thursday 18 September 2025 00:44:43 +0000 (0:00:02.325) 0:00:24.352 **** 2025-09-18 00:50:44.415988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.416020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.416032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.416045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 00:50:44.416056 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.416068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.416085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.416097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.416117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 00:50:44.416128 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.416147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.416159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.416170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.416187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 00:50:44.416198 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.416209 | orchestrator | 2025-09-18 00:50:44.416220 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-18 00:50:44.416231 | orchestrator | Thursday 18 September 2025 00:44:44 +0000 (0:00:00.634) 0:00:24.986 **** 2025-09-18 00:50:44.416242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.416387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 00:50:44.416399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.416434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 00:50:44.416453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416465 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.416476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e', '__omit_place_holder__6d569b8664bfcebc5155774c4f473e011d45506e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 00:50:44.416487 | orchestrator | 2025-09-18 00:50:44.416499 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-18 00:50:44.416510 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:02.629) 0:00:27.616 **** 2025-09-18 00:50:44.416527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416668 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.416703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.416729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.416742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
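Editor's aside: the loop items echoed above are the kolla-ansible service definitions used by the loadbalancer role. Each entry names the container, its image, its bind mounts, and (for haproxy and proxysql, but not keepalived) a healthcheck whose interval, retries, start_period, and timeout values are given in seconds. The sketch below is illustrative only and assumes nothing beyond what the log shows: it mirrors the haproxy item from the loop and maps it onto equivalent `docker run` flags. The `to_docker_args` helper is hypothetical; kolla-ansible itself drives containers through its `kolla_docker` Ansible module, not through a helper like this.

```python
# Illustrative sketch: render a kolla-ansible style service definition
# (as echoed in the loop items above) as equivalent "docker run" flags.
# This is NOT how kolla-ansible invokes Docker; it only visualizes the
# fields carried by each loop item.
service = {
    "container_name": "haproxy",
    "image": "registry.osism.tech/kolla/haproxy:2024.2",
    "privileged": True,
    "volumes": [
        "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro",
        "haproxy_socket:/var/lib/kolla/haproxy/",
        "letsencrypt_certificates:/etc/haproxy/certificates",
    ],
    "healthcheck": {
        "interval": "30",       # seconds
        "retries": "3",
        "start_period": "5",    # seconds
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
        "timeout": "30",        # seconds
    },
}

def to_docker_args(svc: dict) -> list[str]:
    """Hypothetical helper: build a 'docker run' argument list from a service dict."""
    args = ["--name", svc["container_name"]]
    if svc.get("privileged"):
        args.append("--privileged")
    for volume in svc.get("volumes", []):
        args += ["--volume", volume]
    hc = svc.get("healthcheck")
    if hc:
        args += [
            "--health-cmd", hc["test"][-1],
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", hc["retries"],
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]
    return args + [svc["image"]]

print(" ".join(to_docker_args(service)))
```

Note that the keepalived items carry no healthcheck key, which is why only the haproxy and proxysql checks are copied in the "Copying checks for services which are enabled" task above; the keepalived entries are skipped there.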
2025-09-18 00:50:44.416753 | orchestrator | 2025-09-18 00:50:44.416763 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-18 00:50:44.416773 | orchestrator | Thursday 18 September 2025 00:44:50 +0000 (0:00:03.820) 0:00:31.437 **** 2025-09-18 00:50:44.416783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-18 00:50:44.416793 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-18 00:50:44.416802 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-18 00:50:44.416812 | orchestrator | 2025-09-18 00:50:44.416822 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-18 00:50:44.416831 | orchestrator | Thursday 18 September 2025 00:44:54 +0000 (0:00:03.908) 0:00:35.345 **** 2025-09-18 00:50:44.416841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-18 00:50:44.416851 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-18 00:50:44.416861 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-18 00:50:44.416870 | orchestrator | 2025-09-18 00:50:44.416892 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-18 00:50:44.416902 | orchestrator | Thursday 18 September 2025 00:44:59 +0000 (0:00:05.137) 0:00:40.482 **** 2025-09-18 00:50:44.416912 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.416922 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.416931 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.416941 | orchestrator | 2025-09-18 00:50:44.416950 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-18 00:50:44.416960 | orchestrator | Thursday 18 September 2025 00:45:00 +0000 (0:00:00.958) 0:00:41.440 **** 2025-09-18 00:50:44.416970 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-18 00:50:44.416980 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-18 00:50:44.417006 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-18 00:50:44.417016 | orchestrator | 2025-09-18 00:50:44.417026 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-18 00:50:44.417036 | orchestrator | Thursday 18 September 2025 00:45:05 +0000 (0:00:04.962) 0:00:46.403 **** 2025-09-18 00:50:44.417046 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-18 00:50:44.417062 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-18 00:50:44.417072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-18 00:50:44.417106 | orchestrator | 2025-09-18 00:50:44.417116 | orchestrator | TASK 
[loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-18 00:50:44.417126 | orchestrator | Thursday 18 September 2025 00:45:08 +0000 (0:00:02.665) 0:00:49.069 **** 2025-09-18 00:50:44.417136 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-18 00:50:44.417145 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-18 00:50:44.417155 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-18 00:50:44.417165 | orchestrator | 2025-09-18 00:50:44.417174 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-18 00:50:44.417184 | orchestrator | Thursday 18 September 2025 00:45:10 +0000 (0:00:02.397) 0:00:51.466 **** 2025-09-18 00:50:44.417194 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-18 00:50:44.417203 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-18 00:50:44.417223 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-18 00:50:44.417234 | orchestrator | 2025-09-18 00:50:44.417249 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-18 00:50:44.417259 | orchestrator | Thursday 18 September 2025 00:45:12 +0000 (0:00:01.834) 0:00:53.301 **** 2025-09-18 00:50:44.417268 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.417278 | orchestrator | 2025-09-18 00:50:44.417402 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-18 00:50:44.417444 | orchestrator | Thursday 18 September 2025 00:45:13 +0000 (0:00:00.744) 0:00:54.045 **** 2025-09-18 00:50:44.417455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.417466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.417482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.417493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.417511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.417526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.417536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.417547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.417557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.417567 | orchestrator | 2025-09-18 00:50:44.417577 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-18 00:50:44.417587 | orchestrator | Thursday 18 September 2025 00:45:17 +0000 (0:00:04.530) 0:00:58.576 **** 2025-09-18 00:50:44.417603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.417620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.417630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.417641 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.417655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.417665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.417676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.417685 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.417696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.417718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.417728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.417738 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.417748 | orchestrator | 2025-09-18 00:50:44.417758 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-18 00:50:44.417768 | orchestrator | Thursday 18 September 2025 00:45:18 +0000 (0:00:00.625) 0:00:59.202 **** 2025-09-18 00:50:44.417778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.417792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.417802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.417813 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.417848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.417872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.417883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.417930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.417941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.417956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.417966 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.417977 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.417986 | orchestrator | 2025-09-18 00:50:44.418095 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-18 00:50:44.418107 | orchestrator | Thursday 18 September 2025 00:45:19 +0000 (0:00:00.720) 0:00:59.922 **** 2025-09-18 00:50:44.418117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418189 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.418199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418230 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.418240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418335 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.418345 | orchestrator | 2025-09-18 00:50:44.418354 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-18 00:50:44.418364 | orchestrator | Thursday 18 September 2025 00:45:19 +0000 (0:00:00.634) 0:01:00.556 **** 2025-09-18 00:50:44.418374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418404 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.418419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418456 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.418471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418547 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.418556 | orchestrator | 2025-09-18 00:50:44.418566 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-18 00:50:44.418605 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.893) 0:01:01.450 **** 2025-09-18 00:50:44.418621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418725 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.418742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418772 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.418782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.418797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.418814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.418824 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.418834 | orchestrator | 2025-09-18 00:50:44.418843 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-18 00:50:44.418853 | orchestrator | Thursday 18 September 2025 00:45:21 +0000 (0:00:00.704) 0:01:02.154 **** 2025-09-18 00:50:44.418863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420534 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.420547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420642 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.420653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420705 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.420715 | orchestrator | 2025-09-18 00:50:44.420725 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-18 00:50:44.420736 | orchestrator | Thursday 18 September 2025 00:45:22 +0000 (0:00:00.863) 0:01:03.018 **** 2025-09-18 00:50:44.420747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420790 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.420800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420842 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.420852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420893 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.420903 | orchestrator | 2025-09-18 00:50:44.420913 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-18 00:50:44.420923 | orchestrator | Thursday 18 September 2025 00:45:22 +0000 (0:00:00.489) 0:01:03.507 **** 2025-09-18 00:50:44.420933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.420954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.420964 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.420980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.420991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.421008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.421018 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.421032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 00:50:44.421043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 00:50:44.421053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 00:50:44.421063 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.421072 | orchestrator | 2025-09-18 00:50:44.421082 | orchestrator 
| TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-18 00:50:44.421092 | orchestrator | Thursday 18 September 2025 00:45:23 +0000 (0:00:00.710) 0:01:04.218 **** 2025-09-18 00:50:44.421102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-18 00:50:44.421112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-18 00:50:44.421127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-18 00:50:44.421137 | orchestrator | 2025-09-18 00:50:44.421146 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-18 00:50:44.421156 | orchestrator | Thursday 18 September 2025 00:45:25 +0000 (0:00:01.768) 0:01:05.987 **** 2025-09-18 00:50:44.421165 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-18 00:50:44.421175 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-18 00:50:44.421187 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-18 00:50:44.421209 | orchestrator | 2025-09-18 00:50:44.421219 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-18 00:50:44.421229 | orchestrator | Thursday 18 September 2025 00:45:26 +0000 (0:00:01.523) 0:01:07.510 **** 2025-09-18 00:50:44.421238 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 00:50:44.421248 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 00:50:44.421257 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 00:50:44.421271 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 00:50:44.421314 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.421325 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 00:50:44.421335 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.421345 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 00:50:44.421354 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.421363 | orchestrator | 2025-09-18 00:50:44.421373 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-18 00:50:44.421382 | orchestrator | Thursday 18 September 2025 00:45:27 +0000 (0:00:00.999) 0:01:08.509 **** 2025-09-18 00:50:44.421397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.421408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.421418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 00:50:44.421440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.421464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.421476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 00:50:44.421486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.421501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.421511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 00:50:44.421521 | orchestrator | 2025-09-18 00:50:44.421531 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-18 00:50:44.421541 | orchestrator | Thursday 18 September 2025 00:45:30 +0000 (0:00:02.392) 0:01:10.902 **** 2025-09-18 00:50:44.421551 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.421561 | orchestrator | 2025-09-18 00:50:44.421570 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-18 00:50:44.421580 | orchestrator | Thursday 18 September 2025 00:45:30 +0000 (0:00:00.471) 0:01:11.373 **** 2025-09-18 00:50:44.421591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-18 00:50:44.421615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.421626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-18 00:50:44.421661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.421671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-18 00:50:44.421720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.421734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421754 | orchestrator | 2025-09-18 00:50:44.421764 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-18 00:50:44.421774 | orchestrator | Thursday 18 September 2025 00:45:34 +0000 (0:00:04.277) 0:01:15.651 **** 2025-09-18 00:50:44.421784 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-18 00:50:44.421806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.421818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421846 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.421864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-18 00:50:44.421878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.421888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421921 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.421938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-18 00:50:44.421949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.421959 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.421991 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.422002 | orchestrator | 2025-09-18 00:50:44.422011 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-18 00:50:44.422058 | orchestrator | Thursday 18 September 2025 00:45:35 +0000 (0:00:00.833) 0:01:16.484 **** 2025-09-18 00:50:44.422076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-18 00:50:44.422093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-18 00:50:44.422104 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.422116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-18 00:50:44.422131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-18 00:50:44.422141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-18 00:50:44.422151 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.422161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-18 00:50:44.422170 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.422180 | orchestrator | 2025-09-18 00:50:44.422196 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-18 00:50:44.422210 | orchestrator | Thursday 18 September 2025 00:45:36 +0000 (0:00:01.184) 0:01:17.669 **** 2025-09-18 00:50:44.422223 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.422233 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.422242 | 
orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.422251 | orchestrator | 2025-09-18 00:50:44.422261 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-18 00:50:44.422271 | orchestrator | Thursday 18 September 2025 00:45:38 +0000 (0:00:01.390) 0:01:19.059 **** 2025-09-18 00:50:44.422280 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.422309 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.422318 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.422327 | orchestrator | 2025-09-18 00:50:44.422337 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-18 00:50:44.422347 | orchestrator | Thursday 18 September 2025 00:45:40 +0000 (0:00:02.485) 0:01:21.544 **** 2025-09-18 00:50:44.422356 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.422365 | orchestrator | 2025-09-18 00:50:44.422375 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-18 00:50:44.422384 | orchestrator | Thursday 18 September 2025 00:45:42 +0000 (0:00:01.149) 0:01:22.694 **** 2025-09-18 00:50:44.422395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.422411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422450 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.422481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.422492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422544 | orchestrator | 2025-09-18 00:50:44.422553 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-18 00:50:44.422563 | orchestrator | Thursday 18 September 2025 00:45:45 +0000 (0:00:03.485) 0:01:26.180 **** 2025-09-18 00:50:44.422579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.422590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 
00:50:44.422616 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.422631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.422641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422661 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.422676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.422690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.422721 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.422731 | orchestrator | 2025-09-18 00:50:44.422740 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-18 00:50:44.422750 | orchestrator | Thursday 18 September 2025 00:45:46 +0000 (0:00:00.590) 0:01:26.770 **** 2025-09-18 00:50:44.422764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 00:50:44.422775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 00:50:44.422785 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.422794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 00:50:44.422804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 00:50:44.422814 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.422823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 00:50:44.422833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 00:50:44.422842 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.422852 | orchestrator | 2025-09-18 00:50:44.422861 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-18 00:50:44.422870 | orchestrator | Thursday 18 September 2025 00:45:47 +0000 (0:00:01.014) 0:01:27.784 **** 2025-09-18 00:50:44.422880 | orchestrator | 
changed: [testbed-node-0] 2025-09-18 00:50:44.422889 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.422899 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.422908 | orchestrator | 2025-09-18 00:50:44.422917 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-18 00:50:44.422927 | orchestrator | Thursday 18 September 2025 00:45:48 +0000 (0:00:01.465) 0:01:29.250 **** 2025-09-18 00:50:44.422936 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.422946 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.422955 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.422972 | orchestrator | 2025-09-18 00:50:44.422991 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-18 00:50:44.423005 | orchestrator | Thursday 18 September 2025 00:45:50 +0000 (0:00:02.146) 0:01:31.397 **** 2025-09-18 00:50:44.423015 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.423024 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.423034 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.423043 | orchestrator | 2025-09-18 00:50:44.423053 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-18 00:50:44.423071 | orchestrator | Thursday 18 September 2025 00:45:51 +0000 (0:00:00.317) 0:01:31.714 **** 2025-09-18 00:50:44.423081 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.423090 | orchestrator | 2025-09-18 00:50:44.423100 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-18 00:50:44.423109 | orchestrator | Thursday 18 September 2025 00:45:51 +0000 (0:00:00.829) 0:01:32.544 **** 2025-09-18 00:50:44.423119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-18 00:50:44.423134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 
2000 rise 2 fall 5']}}}}) 2025-09-18 00:50:44.423145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-18 00:50:44.423160 | orchestrator | 2025-09-18 00:50:44.423172 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-18 00:50:44.423182 | orchestrator | Thursday 18 September 2025 00:45:54 +0000 (0:00:02.628) 0:01:35.173 **** 2025-09-18 00:50:44.423197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-18 00:50:44.423214 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.423224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-18 00:50:44.423234 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.423244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-18 00:50:44.423254 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.423264 | orchestrator | 2025-09-18 00:50:44.423273 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-18 00:50:44.423329 | orchestrator | Thursday 18 September 2025 00:45:56 +0000 (0:00:01.760) 0:01:36.933 **** 2025-09-18 00:50:44.423343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 00:50:44.423355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 00:50:44.423366 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.423376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 00:50:44.423386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 00:50:44.423402 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.423418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 00:50:44.423428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
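
The 'radosgw' listener above is defined with mode http on port 6780 and, instead of the usual per-controller backends, a custom_member_list pointing at testbed-node-3..5 on port 8081 (the nodes that actually run the RGW daemons). A minimal sketch, assuming a simplified rendering rather than the actual kolla-ansible haproxy template, of how those fields map onto an HAProxy listen block:

    # Illustrative only: the dict literal is copied from the log entry above;
    # the rendering logic and the VIP address are assumptions.
    radosgw = {
        'enabled': True,
        'mode': 'http',
        'external': False,
        'port': '6780',
        'custom_member_list': [
            'server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5',
            'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5',
            'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5',
        ],
    }

    def render_listen(name, svc, vip='192.168.16.254'):
        """Build a minimal HAProxy 'listen' block; the VIP is a placeholder."""
        lines = [
            f'listen {name}',
            f'    mode {svc["mode"]}',
            f'    bind {vip}:{svc["port"]}',
        ]
        # custom_member_list replaces the per-host backend lines the role would
        # otherwise generate, which is why the members point at nodes 3..5 here.
        lines += [f'    {member}' for member in svc['custom_member_list']]
        return '\n'.join(lines)

    print(render_listen('radosgw', radosgw))
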
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 00:50:44.423438 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.423447 | orchestrator | 2025-09-18 00:50:44.423456 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-18 00:50:44.423466 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:01.797) 0:01:38.731 **** 2025-09-18 00:50:44.423476 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.423485 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.423520 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.423531 | orchestrator | 2025-09-18 00:50:44.423541 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-18 00:50:44.423550 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:00.746) 0:01:39.478 **** 2025-09-18 00:50:44.423560 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.423570 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.423579 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.423588 | orchestrator | 2025-09-18 00:50:44.423598 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-18 00:50:44.423608 | orchestrator | Thursday 18 September 2025 00:46:00 +0000 (0:00:01.238) 0:01:40.717 **** 2025-09-18 00:50:44.423617 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.423626 | orchestrator | 2025-09-18 00:50:44.423636 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-18 00:50:44.423646 | orchestrator | Thursday 18 September 2025 00:46:00 +0000 (0:00:00.765) 0:01:41.482 **** 2025-09-18 00:50:44.423660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.423671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
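
The two ceph-rgw ProxySQL tasks above are skipped on all three nodes, consistent with ceph-rgw having no MariaDB backend, while database-backed services such as cinder and designate get "Copying over ... ProxySQL users/rules config" changes further down. As a purely hypothetical illustration of what a users/rules pair conceptually carries (the field names follow ProxySQL's mysql_users and mysql_query_rules tables, but the concrete values and the helper are invented here, not taken from the role):

    # Hypothetical sketch; values are placeholders, not the generated config.
    users = [
        {'username': 'cinder', 'password': 'SECRET', 'default_hostgroup': 0, 'active': 1},
    ]
    rules = [
        {'rule_id': 1, 'active': 1, 'username': 'cinder', 'destination_hostgroup': 0, 'apply': 1},
    ]

    def summarize(users, rules):
        """Pair each user with the rule ids that route its traffic."""
        return {u['username']: [r['rule_id'] for r in rules
                                if r['username'] == u['username']] for u in users}

    print(summarize(users, rules))  # {'cinder': [1]}
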
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.423724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.423781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
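
In the cinder block above only the cinder-api item comes back "changed"; cinder-scheduler, cinder-volume and cinder-backup are "skipping" on every node, matching the fact that only the API service carries a 'haproxy' mapping in its definition. A rough sketch of that selection pattern (the role's real condition also involves group membership and enable flags, so treat this as illustrative):

    # Only entries that define a 'haproxy' mapping produce load-balancer config.
    services = {
        'cinder-api': {'enabled': True,
                       'haproxy': {'cinder_api': {'port': '8776'},
                                   'cinder_api_external': {'port': '8776'}}},
        'cinder-scheduler': {'enabled': True},
        'cinder-volume': {'enabled': True},
        'cinder-backup': {'enabled': True},
    }

    def needs_proxy_config(svc):
        return bool(svc.get('enabled')) and bool(svc.get('haproxy'))

    for name, svc in services.items():
        print('changed:' if needs_proxy_config(svc) else 'skipping:', name)
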
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423815 | orchestrator | 2025-09-18 00:50:44.423825 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-18 00:50:44.423835 | orchestrator | Thursday 18 September 2025 00:46:05 +0000 (0:00:04.347) 0:01:45.830 **** 2025-09-18 00:50:44.423851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.423861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423896 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.423910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.423926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423962 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.423972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.423982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.423996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424021 | orchestrator | skipping: 
[testbed-node-1] 2025-09-18 00:50:44.424031 | orchestrator | 2025-09-18 00:50:44.424041 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-18 00:50:44.424051 | orchestrator | Thursday 18 September 2025 00:46:06 +0000 (0:00:01.265) 0:01:47.095 **** 2025-09-18 00:50:44.424060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-18 00:50:44.424075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-18 00:50:44.424085 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.424095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-18 00:50:44.424105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-18 00:50:44.424115 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.424124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-18 00:50:44.424134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-18 00:50:44.424144 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.424153 | orchestrator | 2025-09-18 00:50:44.424163 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-18 00:50:44.424172 | orchestrator | Thursday 18 September 2025 00:46:07 +0000 (0:00:01.241) 0:01:48.337 **** 2025-09-18 00:50:44.424182 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.424191 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.424200 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.424210 | orchestrator | 2025-09-18 00:50:44.424219 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-18 00:50:44.424229 | orchestrator | Thursday 18 September 2025 00:46:09 +0000 (0:00:01.430) 0:01:49.768 **** 2025-09-18 00:50:44.424247 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.424256 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.424266 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.424275 | orchestrator | 2025-09-18 00:50:44.424333 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-18 00:50:44.424344 | orchestrator | Thursday 18 September 2025 00:46:11 +0000 (0:00:02.018) 0:01:51.787 **** 2025-09-18 00:50:44.424353 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.424363 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.424372 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.424382 | 
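
Each TASK header is followed by a timing line such as "Thursday 18 September 2025 00:46:07 +0000 (0:00:01.241) 0:01:48.337 ****": the parenthesised value is the time spent in the preceding task and the trailing value is the cumulative playbook runtime (compare the barbican and blazar headers above, where the cumulative counter grows by exactly the parenthesised amount). A small parser for that format, assuming it stays stable across the log:

    import re
    from datetime import timedelta

    TIMING = re.compile(r'\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)')

    def parse_timing(line):
        """Return (previous task duration, cumulative runtime) or None."""
        m = TIMING.search(line)
        if not m:
            return None
        h1, m1, s1, h2, m2, s2 = m.groups()
        return (timedelta(hours=int(h1), minutes=int(m1), seconds=float(s1)),
                timedelta(hours=int(h2), minutes=int(m2), seconds=float(s2)))

    line = 'Thursday 18 September 2025 00:46:07 +0000 (0:00:01.241) 0:01:48.337 ****'
    prev, total = parse_timing(line)
    print(prev, total)  # 0:00:01.241000 0:01:48.337000
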
orchestrator | 2025-09-18 00:50:44.424391 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-18 00:50:44.424405 | orchestrator | Thursday 18 September 2025 00:46:11 +0000 (0:00:00.484) 0:01:52.271 **** 2025-09-18 00:50:44.424415 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.424425 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.424434 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.424443 | orchestrator | 2025-09-18 00:50:44.424453 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-18 00:50:44.424463 | orchestrator | Thursday 18 September 2025 00:46:11 +0000 (0:00:00.339) 0:01:52.610 **** 2025-09-18 00:50:44.424472 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.424481 | orchestrator | 2025-09-18 00:50:44.424491 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-18 00:50:44.424500 | orchestrator | Thursday 18 September 2025 00:46:12 +0000 (0:00:00.760) 0:01:53.370 **** 2025-09-18 00:50:44.424510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 00:50:44.424527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 00:50:44.424537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 00:50:44.424553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 00:50:44.424578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424689 | orchestrator | changed: [testbed-node-2] => (item={'key': 
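
The healthcheck blocks in these service definitions (interval, retries, start_period and timeout in seconds plus a CMD-SHELL test such as healthcheck_port designate-mdns 5672) describe Docker-style container healthchecks; kolla passes them through its own container module, so the following is only a sketch of how one entry would read as plain docker run options:

    # Values copied from the designate-mdns entry above; the flag mapping is an
    # illustration, not what the kolla container module literally executes.
    healthcheck = {
        'interval': '30',
        'retries': '3',
        'start_period': '5',
        'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'],
        'timeout': '30',
    }

    def to_docker_flags(hc):
        kind, cmd = hc['test'][0], ' '.join(hc['test'][1:])
        assert kind == 'CMD-SHELL'  # the only test form appearing in this log
        return [
            f'--health-cmd={cmd}',
            f'--health-interval={hc["interval"]}s',
            f'--health-retries={hc["retries"]}',
            f'--health-start-period={hc["start_period"]}s',
            f'--health-timeout={hc["timeout"]}s',
        ]

    print(' '.join(to_docker_flags(healthcheck)))
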
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 00:50:44.424700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 00:50:44.424715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424770 | orchestrator | 2025-09-18 00:50:44.424779 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-18 00:50:44.424789 | orchestrator | Thursday 18 September 2025 00:46:16 +0000 (0:00:03.736) 0:01:57.106 **** 2025-09-18 00:50:44.424805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 00:50:44.424821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 00:50:44.424831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 00:50:44.424898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424908 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.424918 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 00:50:44.424932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.424952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425532 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.425543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 00:50:44.425561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 00:50:44.425571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.425644 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.425654 | orchestrator | 2025-09-18 00:50:44.425664 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-18 00:50:44.425674 | orchestrator | Thursday 18 September 2025 00:46:17 +0000 (0:00:00.833) 0:01:57.939 **** 2025-09-18 00:50:44.425684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-18 00:50:44.425694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-18 00:50:44.425705 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.425715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-18 00:50:44.425725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-18 00:50:44.425747 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.425759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-18 00:50:44.425769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-18 00:50:44.425779 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.425788 | orchestrator | 2025-09-18 00:50:44.425798 | orchestrator | TASK [proxysql-config : Copying 
over designate ProxySQL users config] ********** 2025-09-18 00:50:44.425808 | orchestrator | Thursday 18 September 2025 00:46:18 +0000 (0:00:00.941) 0:01:58.881 **** 2025-09-18 00:50:44.425817 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.425828 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.425844 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.425854 | orchestrator | 2025-09-18 00:50:44.425863 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-18 00:50:44.425873 | orchestrator | Thursday 18 September 2025 00:46:20 +0000 (0:00:01.800) 0:02:00.682 **** 2025-09-18 00:50:44.425889 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.425898 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.425907 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.425917 | orchestrator | 2025-09-18 00:50:44.425926 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-18 00:50:44.425936 | orchestrator | Thursday 18 September 2025 00:46:21 +0000 (0:00:01.842) 0:02:02.525 **** 2025-09-18 00:50:44.425945 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.425954 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.425964 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.425973 | orchestrator | 2025-09-18 00:50:44.425983 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-18 00:50:44.425992 | orchestrator | Thursday 18 September 2025 00:46:22 +0000 (0:00:00.555) 0:02:03.080 **** 2025-09-18 00:50:44.426002 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.426011 | orchestrator | 2025-09-18 00:50:44.426066 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-18 00:50:44.426077 | orchestrator | Thursday 18 September 2025 00:46:23 +0000 (0:00:00.811) 0:02:03.891 **** 2025-09-18 00:50:44.426099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:50:44.426119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.426149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:50:44.426183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.426218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:50:44.426236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.426249 | orchestrator | 2025-09-18 00:50:44.426261 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-18 00:50:44.426279 | orchestrator | Thursday 18 September 2025 00:46:27 +0000 (0:00:04.304) 0:02:08.196 **** 2025-09-18 00:50:44.426327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:50:44.426342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:50:44.426359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.426379 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.426398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.426411 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.426427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:50:44.426458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.426471 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.426482 | orchestrator | 2025-09-18 00:50:44.426499 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-18 00:50:44.426508 | orchestrator | Thursday 18 September 2025 00:46:30 +0000 (0:00:03.281) 0:02:11.478 **** 
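Note on the glance HAProxy entries above: the custom_member_list values recorded for glance_api and glance_api_external are literal HAProxy "server" lines, one per controller, pointing at each node's internal API address on port 9292. A minimal Python sketch (illustrative only; the helper name and node mapping are assumptions, not code from the haproxy-config role) that reproduces those member lines:

    # Rebuild the HAProxy "server" member lines seen in the glance_api /
    # glance_api_external custom_member_list above. Purely illustrative.
    def build_member_lines(nodes, port, check="check inter 2000 rise 2 fall 5"):
        # nodes: inventory hostname -> internal API address (taken from the log)
        return [f"server {name} {addr}:{port} {check}" for name, addr in nodes.items()]

    nodes = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }
    for line in build_member_lines(nodes, 9292):
        print(line)
    # server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    # server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    # server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5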
2025-09-18 00:50:44.426525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 00:50:44.426544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 00:50:44.426560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 00:50:44.426571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 00:50:44.426581 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.426591 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.426601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 00:50:44.426617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 00:50:44.426627 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.426637 | 
orchestrator | 2025-09-18 00:50:44.426647 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-18 00:50:44.426656 | orchestrator | Thursday 18 September 2025 00:46:34 +0000 (0:00:03.347) 0:02:14.826 **** 2025-09-18 00:50:44.426666 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.426676 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.426685 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.426695 | orchestrator | 2025-09-18 00:50:44.426704 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-18 00:50:44.426714 | orchestrator | Thursday 18 September 2025 00:46:35 +0000 (0:00:01.297) 0:02:16.124 **** 2025-09-18 00:50:44.426723 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.426733 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.426742 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.426752 | orchestrator | 2025-09-18 00:50:44.426761 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-18 00:50:44.426771 | orchestrator | Thursday 18 September 2025 00:46:37 +0000 (0:00:01.886) 0:02:18.010 **** 2025-09-18 00:50:44.426781 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.426790 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.426805 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.426814 | orchestrator | 2025-09-18 00:50:44.426824 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-18 00:50:44.426834 | orchestrator | Thursday 18 September 2025 00:46:37 +0000 (0:00:00.420) 0:02:18.430 **** 2025-09-18 00:50:44.426843 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.426853 | orchestrator | 2025-09-18 00:50:44.426863 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-18 00:50:44.426872 | orchestrator | Thursday 18 September 2025 00:46:38 +0000 (0:00:00.773) 0:02:19.204 **** 2025-09-18 00:50:44.426890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 00:50:44.426901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
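Note on the grafana entries above: each service's haproxy mapping carries both an internal entry (external: False) and an external entry (external: True, with external_fqdn api.testbed.osism.xyz), here both listening on port 3000. A small Python sketch (illustrative only; the dict literal is copied from the logged item, the helper is hypothetical) that groups those entries by the external flag:

    # Group the per-service HAProxy entries recorded in the grafana item above
    # by their 'external' flag. Illustrative helper, not from kolla-ansible.
    grafana_haproxy = {
        "grafana_server": {"enabled": "yes", "mode": "http", "external": False,
                           "port": "3000", "listen_port": "3000"},
        "grafana_server_external": {"enabled": True, "mode": "http", "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "3000", "listen_port": "3000"},
    }

    def split_frontends(haproxy_map):
        internal = {k: v["listen_port"] for k, v in haproxy_map.items() if not v["external"]}
        external = {k: v["listen_port"] for k, v in haproxy_map.items() if v["external"]}
        return internal, external

    print(split_frontends(grafana_haproxy))
    # ({'grafana_server': '3000'}, {'grafana_server_external': '3000'})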
2025-09-18 00:50:44.426911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 00:50:44.426921 | orchestrator | 2025-09-18 00:50:44.426931 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-18 00:50:44.426941 | orchestrator | Thursday 18 September 2025 00:46:42 +0000 (0:00:03.536) 0:02:22.741 **** 2025-09-18 00:50:44.426956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 00:50:44.426967 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.426977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 00:50:44.426993 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.427003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 00:50:44.427012 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.427022 | orchestrator | 2025-09-18 00:50:44.427031 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-18 00:50:44.427041 | orchestrator | Thursday 18 
September 2025 00:46:42 +0000 (0:00:00.687) 0:02:23.428 **** 2025-09-18 00:50:44.427055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-18 00:50:44.427065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-18 00:50:44.427075 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.427085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-18 00:50:44.427094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-18 00:50:44.427104 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.427113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-18 00:50:44.427123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-18 00:50:44.427133 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.427142 | orchestrator | 2025-09-18 00:50:44.427152 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-18 00:50:44.427162 | orchestrator | Thursday 18 September 2025 00:46:43 +0000 (0:00:00.686) 0:02:24.115 **** 2025-09-18 00:50:44.427171 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.427181 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.427190 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.427200 | orchestrator | 2025-09-18 00:50:44.427209 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-18 00:50:44.427219 | orchestrator | Thursday 18 September 2025 00:46:44 +0000 (0:00:01.425) 0:02:25.541 **** 2025-09-18 00:50:44.427228 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.427238 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.427247 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.427257 | orchestrator | 2025-09-18 00:50:44.427266 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-18 00:50:44.427298 | orchestrator | Thursday 18 September 2025 00:46:46 +0000 (0:00:02.099) 0:02:27.640 **** 2025-09-18 00:50:44.427309 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.427318 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.427333 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.427343 | orchestrator | 2025-09-18 00:50:44.427352 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-18 00:50:44.427362 | orchestrator | Thursday 18 September 2025 00:46:47 +0000 (0:00:00.556) 0:02:28.197 **** 2025-09-18 00:50:44.427372 | orchestrator | included: horizon for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-18 00:50:44.427381 | orchestrator | 2025-09-18 00:50:44.427391 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-18 00:50:44.427401 | orchestrator | Thursday 18 September 2025 00:46:48 +0000 (0:00:00.941) 0:02:29.139 **** 2025-09-18 00:50:44.427412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:50:44.427448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:50:44.427472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:50:44.427484 | orchestrator | 2025-09-18 00:50:44.427493 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-18 00:50:44.427503 | orchestrator | 
Thursday 18 September 2025 00:46:52 +0000 (0:00:04.135) 0:02:33.274 **** 2025-09-18 00:50:44.427519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:50:44.427537 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.427553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:50:44.427564 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.427580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:50:44.427597 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.427606 | orchestrator | 2025-09-18 00:50:44.427615 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-18 00:50:44.427625 | orchestrator | Thursday 18 September 2025 00:46:53 
+0000 (0:00:01.215) 0:02:34.489 **** 2025-09-18 00:50:44.427635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 00:50:44.427651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 00:50:44.427661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 00:50:44.427672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 00:50:44.427682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-18 00:50:44.427697 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.427707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 00:50:44.427717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 00:50:44.427727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 00:50:44.427742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 00:50:44.427752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-18 00:50:44.427762 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.427771 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 00:50:44.427781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 00:50:44.427791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 00:50:44.427801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 00:50:44.427811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-18 00:50:44.427820 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.427830 | orchestrator | 2025-09-18 00:50:44.427844 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-18 00:50:44.427854 | orchestrator | Thursday 18 September 2025 00:46:54 +0000 (0:00:01.017) 0:02:35.506 **** 2025-09-18 00:50:44.427863 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.427873 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.427882 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.427892 | orchestrator | 2025-09-18 00:50:44.427901 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-18 00:50:44.427916 | orchestrator | Thursday 18 September 2025 00:46:56 +0000 (0:00:01.397) 0:02:36.903 **** 2025-09-18 00:50:44.427926 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.427936 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.427945 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.427954 | orchestrator | 2025-09-18 00:50:44.427964 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-18 00:50:44.427973 | orchestrator | Thursday 18 September 2025 00:46:58 +0000 (0:00:02.134) 0:02:39.038 **** 2025-09-18 00:50:44.427983 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.427992 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.428002 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.428011 | orchestrator | 2025-09-18 00:50:44.428021 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-18 00:50:44.428030 | orchestrator | Thursday 18 September 2025 00:46:58 +0000 (0:00:00.350) 0:02:39.388 **** 2025-09-18 00:50:44.428039 | orchestrator | skipping: 
[testbed-node-0] 2025-09-18 00:50:44.428049 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.428058 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.428068 | orchestrator | 2025-09-18 00:50:44.428077 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-18 00:50:44.428087 | orchestrator | Thursday 18 September 2025 00:46:59 +0000 (0:00:00.633) 0:02:40.022 **** 2025-09-18 00:50:44.428096 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.428106 | orchestrator | 2025-09-18 00:50:44.428115 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-18 00:50:44.428125 | orchestrator | Thursday 18 September 2025 00:47:00 +0000 (0:00:00.974) 0:02:40.996 **** 2025-09-18 00:50:44.428140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:50:44.428151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:50:44.428162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:50:44.428186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:50:44.428197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:50:44.428208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:50:44.428224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:50:44.428235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:50:44.428245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:50:44.428263 | orchestrator | 2025-09-18 00:50:44.428277 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-18 00:50:44.428341 | orchestrator | Thursday 18 September 2025 00:47:04 +0000 (0:00:04.179) 0:02:45.175 **** 2025-09-18 00:50:44.428352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:50:44.428363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:50:44.428380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:50:44.428390 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.428401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:50:44.428418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:50:44.428433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:50:44.428443 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.428453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:50:44.428469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:50:44.428480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:50:44.428489 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.428499 | orchestrator | 2025-09-18 00:50:44.428509 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-18 00:50:44.428524 | orchestrator | Thursday 18 September 2025 00:47:05 +0000 (0:00:00.894) 0:02:46.069 **** 2025-09-18 00:50:44.428534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 00:50:44.428545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 00:50:44.428555 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.428565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 00:50:44.428579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 00:50:44.428589 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.428599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 00:50:44.428610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 00:50:44.428619 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.428629 | orchestrator | 2025-09-18 00:50:44.428639 | orchestrator 
| TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-18 00:50:44.428648 | orchestrator | Thursday 18 September 2025 00:47:06 +0000 (0:00:00.933) 0:02:47.003 **** 2025-09-18 00:50:44.428658 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.428668 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.428677 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.428687 | orchestrator | 2025-09-18 00:50:44.428696 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-18 00:50:44.428706 | orchestrator | Thursday 18 September 2025 00:47:07 +0000 (0:00:01.296) 0:02:48.299 **** 2025-09-18 00:50:44.428716 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.428725 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.428735 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.428744 | orchestrator | 2025-09-18 00:50:44.428754 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-18 00:50:44.428763 | orchestrator | Thursday 18 September 2025 00:47:09 +0000 (0:00:02.212) 0:02:50.512 **** 2025-09-18 00:50:44.428773 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.428783 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.428792 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.428802 | orchestrator | 2025-09-18 00:50:44.428811 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-18 00:50:44.428821 | orchestrator | Thursday 18 September 2025 00:47:10 +0000 (0:00:00.563) 0:02:51.076 **** 2025-09-18 00:50:44.428830 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.428838 | orchestrator | 2025-09-18 00:50:44.428846 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-18 00:50:44.428854 | orchestrator | Thursday 18 September 2025 00:47:11 +0000 (0:00:01.035) 0:02:52.112 **** 2025-09-18 00:50:44.428867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 00:50:44.428881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.428894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 00:50:44.428902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.428911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 00:50:44.428932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.428940 | orchestrator | 2025-09-18 00:50:44.428948 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-18 00:50:44.428956 | orchestrator | Thursday 18 September 2025 00:47:15 +0000 (0:00:03.667) 0:02:55.779 **** 2025-09-18 00:50:44.428965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 00:50:44.428977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.428985 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.428994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 00:50:44.429367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429392 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.429401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 00:50:44.429413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429425 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.429433 | orchestrator | 2025-09-18 00:50:44.429442 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-18 00:50:44.429450 | orchestrator | Thursday 18 September 2025 00:47:16 +0000 (0:00:01.139) 0:02:56.919 **** 2025-09-18 00:50:44.429463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-18 00:50:44.429471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-18 00:50:44.429479 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.429487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-18 00:50:44.429495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-18 00:50:44.429503 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.429511 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-18 00:50:44.429519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-18 00:50:44.429532 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.429540 | orchestrator | 2025-09-18 00:50:44.429548 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-18 00:50:44.429556 | orchestrator | Thursday 18 September 2025 00:47:17 +0000 (0:00:00.977) 0:02:57.896 **** 2025-09-18 00:50:44.429563 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.429571 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.429579 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.429587 | orchestrator | 2025-09-18 00:50:44.429594 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-18 00:50:44.429602 | orchestrator | Thursday 18 September 2025 00:47:18 +0000 (0:00:01.409) 0:02:59.306 **** 2025-09-18 00:50:44.429610 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.429618 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.429626 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.429633 | orchestrator | 2025-09-18 00:50:44.429641 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-18 00:50:44.429649 | orchestrator | Thursday 18 September 2025 00:47:20 +0000 (0:00:02.345) 0:03:01.652 **** 2025-09-18 00:50:44.429712 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.429724 | orchestrator | 2025-09-18 00:50:44.429732 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-18 00:50:44.429746 | orchestrator | Thursday 18 September 2025 00:47:22 +0000 (0:00:01.343) 0:03:02.995 **** 2025-09-18 00:50:44.429756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-18 00:50:44.429765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-18 00:50:44.429865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-18 00:50:44.429914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.429999 | 
orchestrator | 2025-09-18 00:50:44.430012 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-18 00:50:44.430047 | orchestrator | Thursday 18 September 2025 00:47:26 +0000 (0:00:03.900) 0:03:06.895 **** 2025-09-18 00:50:44.430056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-18 00:50:44.430064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430099 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.430108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-18 00:50:44.430172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430200 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.430215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-18 00:50:44.430235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.430335 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.430349 | orchestrator | 2025-09-18 00:50:44.430359 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-18 00:50:44.430367 | orchestrator | Thursday 18 September 2025 00:47:26 +0000 (0:00:00.708) 0:03:07.603 **** 2025-09-18 00:50:44.430375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-18 00:50:44.430383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-18 00:50:44.430391 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.430399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-18 00:50:44.430406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-18 00:50:44.430414 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.430422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-18 00:50:44.430430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-18 00:50:44.430443 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.430451 | orchestrator | 2025-09-18 00:50:44.430459 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-18 00:50:44.430467 | orchestrator | Thursday 18 September 2025 00:47:28 +0000 (0:00:01.408) 0:03:09.012 **** 2025-09-18 00:50:44.430475 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.430483 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.430490 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.430498 | orchestrator | 2025-09-18 00:50:44.430506 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-18 00:50:44.430513 | orchestrator | Thursday 18 September 2025 00:47:29 +0000 (0:00:01.409) 0:03:10.422 **** 2025-09-18 00:50:44.430526 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.430534 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.430541 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.430549 | orchestrator | 2025-09-18 00:50:44.430557 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-18 00:50:44.430564 | orchestrator | Thursday 18 September 2025 00:47:31 +0000 (0:00:02.011) 0:03:12.433 **** 2025-09-18 00:50:44.430572 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.430580 | orchestrator | 2025-09-18 00:50:44.430588 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-18 00:50:44.430595 | orchestrator | Thursday 18 September 2025 00:47:33 +0000 (0:00:01.310) 0:03:13.744 **** 2025-09-18 00:50:44.430603 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 00:50:44.430611 | orchestrator | 2025-09-18 00:50:44.430619 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-18 00:50:44.430626 | orchestrator | Thursday 18 September 2025 00:47:36 +0000 (0:00:02.959) 0:03:16.704 **** 2025-09-18 00:50:44.430688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:50:44.430703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 00:50:44.430722 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.430736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:50:44.430745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 00:50:44.430753 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.430814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:50:44.430832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 00:50:44.430846 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.430856 | orchestrator | 2025-09-18 00:50:44.430868 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-18 00:50:44.430876 | orchestrator | Thursday 18 September 2025 00:47:38 +0000 (0:00:02.075) 0:03:18.779 **** 2025-09-18 00:50:44.430885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:50:44.430944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 00:50:44.430965 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.430981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:50:44.430990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 00:50:44.430999 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:50:44.431093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 00:50:44.431103 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431111 | orchestrator | 2025-09-18 00:50:44.431119 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-18 00:50:44.431127 | orchestrator | Thursday 18 September 2025 00:47:40 +0000 (0:00:02.454) 0:03:21.233 **** 2025-09-18 00:50:44.431139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 00:50:44.431148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 00:50:44.431156 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 00:50:44.431173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 00:50:44.431181 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 00:50:44.431259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 00:50:44.431272 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431329 | orchestrator | 2025-09-18 00:50:44.431338 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-18 00:50:44.431346 | orchestrator | Thursday 18 September 2025 00:47:43 +0000 (0:00:02.806) 0:03:24.040 **** 2025-09-18 00:50:44.431354 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.431362 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.431370 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.431377 | orchestrator | 2025-09-18 00:50:44.431385 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-18 00:50:44.431393 | orchestrator | Thursday 18 September 2025 00:47:45 +0000 (0:00:01.767) 0:03:25.808 **** 2025-09-18 00:50:44.431400 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431408 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431416 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431423 | orchestrator | 2025-09-18 00:50:44.431431 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-18 00:50:44.431439 | orchestrator | Thursday 18 September 2025 00:47:46 +0000 (0:00:01.584) 0:03:27.392 **** 2025-09-18 00:50:44.431447 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431455 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431462 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431470 | orchestrator | 2025-09-18 00:50:44.431478 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-18 00:50:44.431490 | orchestrator | Thursday 18 September 2025 00:47:47 +0000 (0:00:00.341) 0:03:27.734 **** 2025-09-18 00:50:44.431498 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.431506 | orchestrator | 2025-09-18 00:50:44.431514 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-18 00:50:44.431522 | orchestrator | Thursday 18 September 2025 00:47:48 +0000 (0:00:01.351) 0:03:29.085 **** 2025-09-18 00:50:44.431530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-18 00:50:44.431539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-18 00:50:44.431614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-18 00:50:44.431629 | orchestrator | 2025-09-18 00:50:44.431641 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-18 00:50:44.431649 | orchestrator | Thursday 18 September 2025 00:47:49 +0000 (0:00:01.414) 0:03:30.500 **** 2025-09-18 00:50:44.431657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-18 00:50:44.431670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': 
'11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-18 00:50:44.431679 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431686 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-18 00:50:44.431708 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431716 | orchestrator | 2025-09-18 00:50:44.431724 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-18 00:50:44.431732 | orchestrator | Thursday 18 September 2025 00:47:50 +0000 (0:00:00.437) 0:03:30.938 **** 2025-09-18 00:50:44.431740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-18 00:50:44.431748 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-18 00:50:44.431764 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-18 00:50:44.431833 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431846 | orchestrator | 2025-09-18 00:50:44.431855 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-18 00:50:44.431861 | orchestrator | Thursday 18 September 2025 00:47:51 +0000 (0:00:00.898) 0:03:31.836 **** 2025-09-18 00:50:44.431868 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431875 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431881 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431888 | orchestrator | 2025-09-18 00:50:44.431894 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-18 00:50:44.431901 | orchestrator | Thursday 18 September 2025 00:47:51 +0000 (0:00:00.436) 0:03:32.273 **** 2025-09-18 00:50:44.431907 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431914 | orchestrator | skipping: [testbed-node-1] 
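The memcached tasks above skip on every node because the service's HAProxy entry is disabled ('enabled': False) even though the memcached container itself is enabled. The following is a minimal Python sketch of that selection logic, assuming the haproxy-config role only renders frontends whose entry is enabled; the frontends_to_configure helper is illustrative and not part of kolla-ansible.

# Illustrative sketch (not kolla-ansible code): reproduce why the memcached
# haproxy/firewall tasks above report "skipping" on every node.
# The service definition mirrors the item dict printed in the log; the
# selection logic below is an assumption about the role's behaviour.

memcached_service = {
    "container_name": "memcached",
    "enabled": True,                      # the container itself is deployed
    "group": "memcached",
    "haproxy": {
        "memcached": {
            "enabled": False,             # no HAProxy frontend requested
            "mode": "tcp",
            "port": "11211",
            "active_passive": True,
        }
    },
}

def frontends_to_configure(service: dict) -> list[str]:
    """Return the HAProxy listener names that would actually be rendered."""
    return [
        name
        for name, conf in service.get("haproxy", {}).items()
        if conf.get("enabled")
    ]

if __name__ == "__main__":
    # Prints [] -> nothing to template, so the tasks are skipped per item.
    print(frontends_to_configure(memcached_service))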
2025-09-18 00:50:44.431920 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431927 | orchestrator | 2025-09-18 00:50:44.431933 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-18 00:50:44.431940 | orchestrator | Thursday 18 September 2025 00:47:52 +0000 (0:00:01.288) 0:03:33.561 **** 2025-09-18 00:50:44.431946 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.431953 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.431959 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.431965 | orchestrator | 2025-09-18 00:50:44.431972 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-18 00:50:44.431978 | orchestrator | Thursday 18 September 2025 00:47:53 +0000 (0:00:00.325) 0:03:33.887 **** 2025-09-18 00:50:44.431985 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.431991 | orchestrator | 2025-09-18 00:50:44.431998 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-18 00:50:44.432004 | orchestrator | Thursday 18 September 2025 00:47:54 +0000 (0:00:01.484) 0:03:35.371 **** 2025-09-18 00:50:44.432025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 00:50:44.432038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 00:50:44.432116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.432209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.432275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.432344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 00:50:44.432355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 00:50:44.432385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432457 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 00:50:44.432481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 00:50:44.432564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.432665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.432711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 
00:50:44.432813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.432820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.432827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.432895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  
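The per-item output above shows the shape of the kolla-ansible service map for neutron: each entry carries container_name, image, volumes, an optional healthcheck, and an optional haproxy sub-dict, and the haproxy-config tasks only act on items whose service (and listener) is enabled; every other item is reported as skipping. The short Python sketch below is only a hypothetical re-expression of that selection logic for readers following the log - the real role is implemented as Ansible tasks and Jinja2 templates, and names such as haproxy_frontends and truthy are illustrative, not part of kolla-ansible.

# Minimal sketch, assuming abbreviated copies of the service dicts printed above.
# It mimics the decision visible in the log: only services whose own 'enabled' flag
# and whose haproxy listener 'enabled' flag are truthy produce haproxy config;
# everything else corresponds to a "skipping" line.

services = {
    "neutron-server": {
        "enabled": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http", "external": False,
                               "port": "9696", "listen_port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                        "external_fqdn": "api.testbed.osism.xyz",
                                        "port": "9696", "listen_port": "9696"},
        },
    },
    "neutron-tls-proxy": {
        "enabled": "no",  # the log mixes booleans with 'yes'/'no' strings
        "haproxy": {
            "neutron_tls_proxy": {"enabled": False, "port": "9696", "listen_port": "9696"},
        },
    },
    "neutron-ovn-metadata-agent": {"enabled": True},  # no haproxy section: nothing to render
}

def truthy(value):
    # Normalise the mixed bool / "yes"/"no" style seen in the logged dicts.
    if isinstance(value, str):
        return value.lower() in ("yes", "true", "1")
    return bool(value)

def haproxy_frontends(services):
    """Yield (service, listener, config) for every enabled haproxy listener."""
    for name, svc in services.items():
        if not truthy(svc.get("enabled")):
            continue  # corresponds to a "skipping" item in the log
        for listener, cfg in svc.get("haproxy", {}).items():
            if truthy(cfg.get("enabled")):
                yield name, listener, cfg  # corresponds to config being written ("changed")

if __name__ == "__main__":
    for name, listener, cfg in haproxy_frontends(services):
        print(f"{listener}: port {cfg['port']} (external={cfg.get('external', False)})")

Run as-is, the sketch prints only the two neutron_server listeners on port 9696, which matches the behaviour recorded in the log: the tls-proxy and agent entries are skipped because either the service or its listener is disabled.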
2025-09-18 00:50:44.432905 | orchestrator | 2025-09-18 00:50:44.432912 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-18 00:50:44.432919 | orchestrator | Thursday 18 September 2025 00:47:58 +0000 (0:00:04.271) 0:03:39.643 **** 2025-09-18 00:50:44.432930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 00:50:44.432937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.432998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2025-09-18 00:50:44.433014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 00:50:44.433023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.433111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 00:50:44.433208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.433225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 00:50:44.433250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.433258 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.433265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 00:50:44.433379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 00:50:44.433457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.433563 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.433585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 00:50:44.433658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.433676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.433684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 00:50:44.433713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.433721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 00:50:44.433728 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.433735 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.433741 | orchestrator | 2025-09-18 00:50:44.433748 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-18 00:50:44.433755 | orchestrator | Thursday 18 September 2025 00:48:00 +0000 (0:00:01.584) 0:03:41.227 **** 2025-09-18 00:50:44.433762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-18 00:50:44.433769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2025-09-18 00:50:44.433776 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.433783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-18 00:50:44.433793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-18 00:50:44.433800 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.433807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-18 00:50:44.433814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-18 00:50:44.433820 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.433832 | orchestrator | 2025-09-18 00:50:44.433839 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-18 00:50:44.433845 | orchestrator | Thursday 18 September 2025 00:48:02 +0000 (0:00:02.083) 0:03:43.311 **** 2025-09-18 00:50:44.433852 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.433858 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.433865 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.433872 | orchestrator | 2025-09-18 00:50:44.433878 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-18 00:50:44.433885 | orchestrator | Thursday 18 September 2025 00:48:03 +0000 (0:00:01.333) 0:03:44.644 **** 2025-09-18 00:50:44.433891 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.433898 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.433904 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.433911 | orchestrator | 2025-09-18 00:50:44.433917 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-18 00:50:44.433924 | orchestrator | Thursday 18 September 2025 00:48:06 +0000 (0:00:02.399) 0:03:47.043 **** 2025-09-18 00:50:44.433931 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.433937 | orchestrator | 2025-09-18 00:50:44.433944 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-18 00:50:44.433950 | orchestrator | Thursday 18 September 2025 00:48:07 +0000 (0:00:01.265) 0:03:48.309 **** 2025-09-18 00:50:44.433977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.433986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.433996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.434008 | orchestrator | 2025-09-18 00:50:44.434037 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-18 00:50:44.434045 | orchestrator | Thursday 18 September 2025 00:48:11 +0000 (0:00:03.773) 0:03:52.083 **** 2025-09-18 00:50:44.434052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.434059 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.434085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.434093 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.434100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.434107 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.434114 | orchestrator | 2025-09-18 00:50:44.434121 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-18 00:50:44.434127 | orchestrator | Thursday 18 September 2025 00:48:11 +0000 (0:00:00.545) 0:03:52.628 **** 2025-09-18 00:50:44.434134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434154 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.434161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434181 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.434188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 
00:50:44.434195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434203 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.434211 | orchestrator | 2025-09-18 00:50:44.434218 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-18 00:50:44.434226 | orchestrator | Thursday 18 September 2025 00:48:12 +0000 (0:00:00.759) 0:03:53.388 **** 2025-09-18 00:50:44.434233 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.434241 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.434249 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.434256 | orchestrator | 2025-09-18 00:50:44.434264 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-18 00:50:44.434271 | orchestrator | Thursday 18 September 2025 00:48:14 +0000 (0:00:02.071) 0:03:55.459 **** 2025-09-18 00:50:44.434280 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.434302 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.434310 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.434318 | orchestrator | 2025-09-18 00:50:44.434325 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-18 00:50:44.434333 | orchestrator | Thursday 18 September 2025 00:48:16 +0000 (0:00:01.720) 0:03:57.180 **** 2025-09-18 00:50:44.434340 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.434348 | orchestrator | 2025-09-18 00:50:44.434356 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-18 00:50:44.434363 | orchestrator | Thursday 18 September 2025 00:48:18 +0000 (0:00:01.552) 0:03:58.732 **** 2025-09-18 00:50:44.434393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.434408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.434438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434474 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.434490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434506 | orchestrator | 2025-09-18 00:50:44.434514 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-18 00:50:44.434522 | orchestrator | Thursday 18 September 2025 00:48:22 +0000 (0:00:04.250) 0:04:02.983 **** 2025-09-18 00:50:44.434548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.434557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434577 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.434587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.434595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434609 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.434634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.434646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.434664 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.434670 | orchestrator | 2025-09-18 00:50:44.434677 | orchestrator | TASK 
[haproxy-config : Configuring firewall for nova] ************************** 2025-09-18 00:50:44.434684 | orchestrator | Thursday 18 September 2025 00:48:23 +0000 (0:00:00.958) 0:04:03.941 **** 2025-09-18 00:50:44.434691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434719 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.434725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434774 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.434782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 00:50:44.434809 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.434815 | orchestrator | 2025-09-18 00:50:44.434822 | orchestrator | TASK [proxysql-config : 
Copying over nova ProxySQL users config] *************** 2025-09-18 00:50:44.434829 | orchestrator | Thursday 18 September 2025 00:48:24 +0000 (0:00:00.764) 0:04:04.706 **** 2025-09-18 00:50:44.434835 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.434842 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.434848 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.434855 | orchestrator | 2025-09-18 00:50:44.434861 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-18 00:50:44.434868 | orchestrator | Thursday 18 September 2025 00:48:25 +0000 (0:00:01.237) 0:04:05.943 **** 2025-09-18 00:50:44.434875 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.434881 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.434888 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.434894 | orchestrator | 2025-09-18 00:50:44.434900 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-18 00:50:44.434907 | orchestrator | Thursday 18 September 2025 00:48:27 +0000 (0:00:01.808) 0:04:07.752 **** 2025-09-18 00:50:44.434914 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.434920 | orchestrator | 2025-09-18 00:50:44.434927 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-18 00:50:44.434933 | orchestrator | Thursday 18 September 2025 00:48:28 +0000 (0:00:01.453) 0:04:09.205 **** 2025-09-18 00:50:44.434940 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-18 00:50:44.434947 | orchestrator | 2025-09-18 00:50:44.434957 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-18 00:50:44.434964 | orchestrator | Thursday 18 September 2025 00:48:29 +0000 (0:00:00.817) 0:04:10.023 **** 2025-09-18 00:50:44.434971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-18 00:50:44.434978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-18 00:50:44.434989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-18 00:50:44.434996 | orchestrator | 2025-09-18 00:50:44.435002 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-18 00:50:44.435009 | orchestrator | Thursday 18 September 2025 00:48:33 +0000 (0:00:04.506) 0:04:14.530 **** 2025-09-18 00:50:44.435033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435041 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435055 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435069 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435075 | orchestrator | 2025-09-18 00:50:44.435082 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-18 00:50:44.435089 | orchestrator | Thursday 18 September 2025 00:48:34 +0000 (0:00:00.954) 0:04:15.485 **** 2025-09-18 00:50:44.435095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 00:50:44.435106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 00:50:44.435113 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 00:50:44.435126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 00:50:44.435137 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 00:50:44.435151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 00:50:44.435158 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435165 | orchestrator | 2025-09-18 00:50:44.435171 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-18 00:50:44.435178 | orchestrator | Thursday 18 September 2025 00:48:36 +0000 (0:00:01.405) 0:04:16.890 **** 2025-09-18 00:50:44.435185 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.435191 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.435198 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.435204 | orchestrator | 2025-09-18 00:50:44.435211 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-18 00:50:44.435217 | orchestrator | Thursday 18 September 2025 00:48:38 +0000 (0:00:02.268) 0:04:19.159 **** 2025-09-18 00:50:44.435224 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.435230 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.435237 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.435243 | orchestrator | 2025-09-18 00:50:44.435250 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-18 00:50:44.435257 | orchestrator | Thursday 18 September 2025 00:48:41 +0000 (0:00:02.910) 0:04:22.070 **** 2025-09-18 00:50:44.435280 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-18 00:50:44.435301 | orchestrator | 2025-09-18 00:50:44.435307 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-18 00:50:44.435314 | orchestrator | Thursday 18 September 2025 00:48:42 +0000 (0:00:01.535) 0:04:23.605 **** 2025-09-18 00:50:44.435321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435327 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435341 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435359 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435366 | orchestrator | 2025-09-18 00:50:44.435373 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-18 00:50:44.435379 | orchestrator | Thursday 18 September 2025 00:48:44 +0000 (0:00:01.301) 0:04:24.907 **** 2025-09-18 00:50:44.435391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435398 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435412 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 00:50:44.435425 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435432 | orchestrator | 2025-09-18 00:50:44.435438 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-18 00:50:44.435445 | orchestrator | Thursday 18 September 2025 00:48:45 +0000 (0:00:01.447) 
0:04:26.355 **** 2025-09-18 00:50:44.435452 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435458 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435465 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435471 | orchestrator | 2025-09-18 00:50:44.435496 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-18 00:50:44.435504 | orchestrator | Thursday 18 September 2025 00:48:47 +0000 (0:00:01.851) 0:04:28.206 **** 2025-09-18 00:50:44.435511 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.435517 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.435524 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.435530 | orchestrator | 2025-09-18 00:50:44.435537 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-18 00:50:44.435544 | orchestrator | Thursday 18 September 2025 00:48:49 +0000 (0:00:02.433) 0:04:30.639 **** 2025-09-18 00:50:44.435550 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.435557 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.435563 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.435570 | orchestrator | 2025-09-18 00:50:44.435577 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-18 00:50:44.435583 | orchestrator | Thursday 18 September 2025 00:48:52 +0000 (0:00:02.952) 0:04:33.592 **** 2025-09-18 00:50:44.435590 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-18 00:50:44.435601 | orchestrator | 2025-09-18 00:50:44.435608 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-18 00:50:44.435614 | orchestrator | Thursday 18 September 2025 00:48:53 +0000 (0:00:00.900) 0:04:34.492 **** 2025-09-18 00:50:44.435621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 00:50:44.435628 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 00:50:44.435645 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 00:50:44.435659 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435665 | orchestrator | 2025-09-18 00:50:44.435672 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-18 00:50:44.435679 | orchestrator | Thursday 18 September 2025 00:48:55 +0000 (0:00:01.341) 0:04:35.834 **** 2025-09-18 00:50:44.435686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 00:50:44.435692 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 00:50:44.435706 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 00:50:44.435742 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435749 | orchestrator | 2025-09-18 00:50:44.435756 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-18 00:50:44.435762 | orchestrator | Thursday 18 September 2025 00:48:56 +0000 (0:00:01.350) 0:04:37.185 **** 2025-09-18 00:50:44.435769 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.435776 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.435782 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.435789 | orchestrator | 2025-09-18 00:50:44.435795 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-18 00:50:44.435802 | orchestrator | Thursday 18 September 2025 00:48:58 +0000 (0:00:01.587) 0:04:38.772 **** 2025-09-18 00:50:44.435808 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.435815 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.435821 | orchestrator | ok: [testbed-node-2] 2025-09-18 
00:50:44.435828 | orchestrator | 2025-09-18 00:50:44.435835 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-18 00:50:44.435841 | orchestrator | Thursday 18 September 2025 00:49:00 +0000 (0:00:02.372) 0:04:41.145 **** 2025-09-18 00:50:44.435848 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.435854 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.435861 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.435867 | orchestrator | 2025-09-18 00:50:44.435874 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-18 00:50:44.435881 | orchestrator | Thursday 18 September 2025 00:49:03 +0000 (0:00:03.352) 0:04:44.497 **** 2025-09-18 00:50:44.435887 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.435894 | orchestrator | 2025-09-18 00:50:44.435900 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-18 00:50:44.435907 | orchestrator | Thursday 18 September 2025 00:49:05 +0000 (0:00:01.668) 0:04:46.166 **** 2025-09-18 00:50:44.435917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.435925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 00:50:44.435932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.435960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.435968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.435975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.435985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 00:50:44.435992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.435999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.436035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.436042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 00:50:44.436049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.436074 | orchestrator | 2025-09-18 00:50:44.436080 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-18 00:50:44.436087 | orchestrator | Thursday 18 September 2025 00:49:08 +0000 (0:00:03.435) 0:04:49.602 **** 2025-09-18 00:50:44.436124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.436133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 00:50:44.436140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.436175 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.436182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.436211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 00:50:44.436219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.436239 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.436250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.436261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 00:50:44.436268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 00:50:44.436346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 00:50:44.436353 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.436359 | orchestrator | 2025-09-18 00:50:44.436366 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-18 00:50:44.436373 | orchestrator | Thursday 18 September 2025 00:49:09 +0000 (0:00:00.720) 0:04:50.322 **** 2025-09-18 00:50:44.436380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-18 00:50:44.436387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-18 00:50:44.436394 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.436400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-18 00:50:44.436411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-18 00:50:44.436418 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.436425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-18 00:50:44.436439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-18 00:50:44.436446 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.436452 | orchestrator | 2025-09-18 00:50:44.436459 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-18 00:50:44.436466 | orchestrator | Thursday 18 September 2025 00:49:11 +0000 (0:00:01.623) 0:04:51.945 **** 2025-09-18 00:50:44.436472 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.436479 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.436486 | orchestrator | changed: 
[testbed-node-2] 2025-09-18 00:50:44.436492 | orchestrator | 2025-09-18 00:50:44.436499 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-18 00:50:44.436505 | orchestrator | Thursday 18 September 2025 00:49:12 +0000 (0:00:01.494) 0:04:53.440 **** 2025-09-18 00:50:44.436512 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.436518 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.436525 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.436531 | orchestrator | 2025-09-18 00:50:44.436538 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-18 00:50:44.436544 | orchestrator | Thursday 18 September 2025 00:49:15 +0000 (0:00:02.258) 0:04:55.698 **** 2025-09-18 00:50:44.436551 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.436557 | orchestrator | 2025-09-18 00:50:44.436564 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-18 00:50:44.436570 | orchestrator | Thursday 18 September 2025 00:49:16 +0000 (0:00:01.343) 0:04:57.042 **** 2025-09-18 00:50:44.436597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:50:44.436606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:50:44.436613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:50:44.436629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:50:44.436654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:50:44.436663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:50:44.436670 | orchestrator | 
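The haproxy-config tasks in this play loop over kolla-ansible service definitions of the shape dumped above; the nested 'haproxy' mapping carries the listener settings (mode, port, external flag, optional extra options). A minimal sketch of reading that structure, assuming the simplified dict below copied from the opensearch entry in this log; render_frontends is a hypothetical helper for illustration only and is not part of the kolla-ansible role:

    # Illustrative sketch: mirrors the structure of the service dicts logged above.
    # render_frontends is a hypothetical helper, not kolla-ansible code.
    opensearch_service = {
        "container_name": "opensearch",
        "enabled": True,
        "haproxy": {
            "opensearch": {
                "enabled": True,
                "mode": "http",
                "external": False,
                "port": "9200",
                "frontend_http_extra": ["option dontlog-normal"],
            }
        },
    }

    def render_frontends(service: dict) -> list[str]:
        """Collect the haproxy listener names and ports described by a service dict."""
        listeners = []
        for name, cfg in service.get("haproxy", {}).items():
            if cfg.get("enabled") in (True, "yes"):
                scope = "external" if cfg.get("external") else "internal"
                listeners.append(f"{name}: {cfg['mode']} on port {cfg['port']} ({scope})")
        return listeners

    print(render_frontends(opensearch_service))
    # ['opensearch: http on port 9200 (internal)']
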
2025-09-18 00:50:44.436676 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-18 00:50:44.436683 | orchestrator | Thursday 18 September 2025 00:49:21 +0000 (0:00:05.506) 0:05:02.549 **** 2025-09-18 00:50:44.436690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:50:44.436705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:50:44.436712 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.436719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:50:44.436744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:50:44.436752 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.436760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:50:44.436775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:50:44.436782 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.436789 | orchestrator | 2025-09-18 00:50:44.436795 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-18 00:50:44.436802 | orchestrator | Thursday 18 September 2025 00:49:22 +0000 (0:00:00.668) 0:05:03.217 **** 2025-09-18 00:50:44.436809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-18 00:50:44.436815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}})  2025-09-18 00:50:44.436822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 00:50:44.436828 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.436834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-18 00:50:44.436856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 00:50:44.436863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 00:50:44.436870 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.436877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-18 00:50:44.436883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 00:50:44.436894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 00:50:44.436900 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.436906 | orchestrator | 2025-09-18 00:50:44.436912 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-18 00:50:44.436919 | orchestrator | Thursday 18 September 2025 00:49:23 +0000 (0:00:00.903) 0:05:04.121 **** 2025-09-18 00:50:44.436925 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.436931 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.436937 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.436943 | orchestrator | 2025-09-18 00:50:44.436949 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-18 00:50:44.436955 | orchestrator | Thursday 18 September 2025 00:49:24 +0000 (0:00:00.864) 0:05:04.986 **** 2025-09-18 00:50:44.436962 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.436968 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.436974 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.436980 | orchestrator | 2025-09-18 00:50:44.436986 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-18 00:50:44.436992 | orchestrator | Thursday 18 September 2025 00:49:25 +0000 (0:00:01.406) 0:05:06.392 **** 2025-09-18 00:50:44.436998 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-18 00:50:44.437004 | orchestrator | 2025-09-18 00:50:44.437014 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-18 00:50:44.437020 | orchestrator | Thursday 18 September 2025 00:49:27 +0000 (0:00:01.399) 0:05:07.791 **** 2025-09-18 00:50:44.437026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:50:44.437033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:50:44.437040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:50:44.437090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:50:44.437096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:50:44.437146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:50:44.437152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:50:44.437185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 00:50:44.437195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:50:44.437224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 00:50:44.437240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:50:44.437269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 00:50:44.437276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437317 | orchestrator | 2025-09-18 00:50:44.437323 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-18 00:50:44.437330 | orchestrator | Thursday 18 September 2025 00:49:31 +0000 (0:00:04.520) 0:05:12.311 **** 2025-09-18 00:50:44.437336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 00:50:44.437342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:50:44.437352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 00:50:44.437385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 00:50:44.437391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 00:50:44.437407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:50:44.437424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437440 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.437446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 00:50:44.437470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 00:50:44.437480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437502 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.437508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 00:50:44.437515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:50:44.437524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 00:50:44.437558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 00:50:44.437565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:50:44.437584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:50:44.437598 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.437604 | orchestrator | 2025-09-18 00:50:44.437610 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-18 00:50:44.437617 | orchestrator | Thursday 18 September 2025 00:49:32 +0000 (0:00:01.296) 0:05:13.608 **** 2025-09-18 00:50:44.437623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-18 00:50:44.437629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-18 00:50:44.437636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-18 00:50:44.437643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-18 00:50:44.437649 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.437656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-18 00:50:44.437665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-18 00:50:44.437671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-18 00:50:44.437678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2025-09-18 00:50:44.437684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-18 00:50:44.437691 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.437697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-18 00:50:44.437703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-18 00:50:44.437710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-18 00:50:44.437720 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.437727 | orchestrator | 2025-09-18 00:50:44.437733 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-18 00:50:44.437742 | orchestrator | Thursday 18 September 2025 00:49:34 +0000 (0:00:01.130) 0:05:14.739 **** 2025-09-18 00:50:44.437748 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.437755 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.437761 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.437767 | orchestrator | 2025-09-18 00:50:44.437773 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-18 00:50:44.437779 | orchestrator | Thursday 18 September 2025 00:49:34 +0000 (0:00:00.464) 0:05:15.203 **** 2025-09-18 00:50:44.437785 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.437791 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.437797 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.437803 | orchestrator | 2025-09-18 00:50:44.437810 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-18 00:50:44.437816 | orchestrator | Thursday 18 September 2025 00:49:36 +0000 (0:00:01.567) 0:05:16.771 **** 2025-09-18 00:50:44.437822 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.437828 | orchestrator | 2025-09-18 00:50:44.437834 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-18 00:50:44.437840 | orchestrator | Thursday 18 September 2025 00:49:37 +0000 (0:00:01.774) 0:05:18.545 **** 2025-09-18 00:50:44.437846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:50:44.437857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:50:44.437864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 00:50:44.437876 | orchestrator | 2025-09-18 00:50:44.437883 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-18 00:50:44.437889 | orchestrator | Thursday 18 September 2025 00:49:40 +0000 (0:00:02.745) 0:05:21.291 **** 2025-09-18 00:50:44.437898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-18 00:50:44.437905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-18 00:50:44.437911 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.437918 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.437927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-18 00:50:44.437934 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.437940 | orchestrator | 2025-09-18 00:50:44.437950 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-18 00:50:44.437956 | orchestrator | Thursday 18 September 2025 00:49:41 +0000 (0:00:00.423) 0:05:21.714 **** 2025-09-18 00:50:44.437963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-18 00:50:44.437969 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.437975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-18 00:50:44.437981 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.437988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-18 00:50:44.437994 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438000 | orchestrator | 2025-09-18 00:50:44.438006 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-18 00:50:44.438012 | orchestrator | Thursday 18 September 2025 00:49:42 +0000 (0:00:01.018) 0:05:22.732 **** 2025-09-18 00:50:44.438038 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438045 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438052 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438058 | orchestrator | 2025-09-18 00:50:44.438064 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-18 00:50:44.438070 | orchestrator | Thursday 18 September 2025 00:49:42 +0000 (0:00:00.463) 0:05:23.196 **** 2025-09-18 00:50:44.438079 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438086 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438092 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438098 | orchestrator | 2025-09-18 00:50:44.438104 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-18 00:50:44.438110 | orchestrator | Thursday 18 September 2025 00:49:43 +0000 (0:00:01.418) 0:05:24.615 **** 2025-09-18 00:50:44.438117 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:50:44.438123 | orchestrator | 2025-09-18 00:50:44.438129 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-18 00:50:44.438135 | orchestrator | Thursday 18 September 2025 00:49:45 +0000 (0:00:01.841) 0:05:26.456 **** 2025-09-18 00:50:44.438141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.438151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.438162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.438172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.438180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.438186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-18 00:50:44.438197 | orchestrator | 2025-09-18 00:50:44.438205 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-18 00:50:44.438212 | orchestrator | Thursday 18 September 2025 00:49:52 +0000 (0:00:06.310) 0:05:32.766 **** 2025-09-18 00:50:44.438218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.438225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.438231 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 
'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.438248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.438259 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.438276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-18 00:50:44.438296 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438303 | orchestrator | 2025-09-18 00:50:44.438309 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-18 00:50:44.438315 | orchestrator | Thursday 18 September 2025 00:49:52 +0000 (0:00:00.654) 0:05:33.421 **** 2025-09-18 00:50:44.438321 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438349 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438386 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-18 00:50:44.438420 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438426 | orchestrator | 2025-09-18 00:50:44.438432 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-18 00:50:44.438439 | orchestrator | Thursday 18 September 2025 00:49:54 +0000 (0:00:01.672) 0:05:35.093 **** 
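The service definitions echoed by the haproxy-config tasks above (rabbitmq, skyline-apiserver, skyline-console) all share the same shape: a container description (image, volumes, healthcheck) plus an optional 'haproxy' sub-dict whose entries describe the frontends HAProxy should expose. Purely as an illustration of that structure, the sketch below walks one such dict, copied from the skyline-apiserver item printed in this log, and summarises the frontends it requests; the helper list_haproxy_frontends() is a hypothetical inspection aid and not part of kolla-ansible.

```python
# Minimal sketch, assuming only the dict shape visible in the log above.
# The service definition is copied from the 'skyline-apiserver' item; the
# helper function is illustrative and does not exist in kolla-ansible.
skyline_apiserver = {
    "container_name": "skyline_apiserver",
    "enabled": True,
    "image": "registry.osism.tech/kolla/skyline-apiserver:2024.2",
    "haproxy": {
        "skyline_apiserver": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
        "skyline_apiserver_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
    },
}

def list_haproxy_frontends(service: dict) -> list[str]:
    """Summarise the HAProxy frontends a kolla-style service dict requests."""
    lines = []
    for name, fe in service.get("haproxy", {}).items():
        if fe.get("enabled") != "yes":
            continue  # disabled frontends are skipped, as in the log
        scope = "external" if fe.get("external") else "internal"
        lines.append(f"{name}: {scope} {fe['mode']} on port {fe['listen_port']}")
    return lines

print("\n".join(list_haproxy_frontends(skyline_apiserver)))
# skyline_apiserver: internal http on port 9998
# skyline_apiserver_external: external http on port 9998
```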
2025-09-18 00:50:44.438445 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.438451 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.438457 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.438463 | orchestrator | 2025-09-18 00:50:44.438469 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-18 00:50:44.438476 | orchestrator | Thursday 18 September 2025 00:49:55 +0000 (0:00:01.391) 0:05:36.485 **** 2025-09-18 00:50:44.438482 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.438488 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.438494 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.438500 | orchestrator | 2025-09-18 00:50:44.438506 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-18 00:50:44.438512 | orchestrator | Thursday 18 September 2025 00:49:58 +0000 (0:00:02.258) 0:05:38.744 **** 2025-09-18 00:50:44.438518 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438525 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438531 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438537 | orchestrator | 2025-09-18 00:50:44.438543 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-18 00:50:44.438549 | orchestrator | Thursday 18 September 2025 00:49:58 +0000 (0:00:00.341) 0:05:39.085 **** 2025-09-18 00:50:44.438555 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438561 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438567 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438573 | orchestrator | 2025-09-18 00:50:44.438580 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-18 00:50:44.438586 | orchestrator | Thursday 18 September 2025 00:49:58 +0000 (0:00:00.336) 0:05:39.421 **** 2025-09-18 00:50:44.438592 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438598 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438604 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438610 | orchestrator | 2025-09-18 00:50:44.438617 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-18 00:50:44.438626 | orchestrator | Thursday 18 September 2025 00:49:59 +0000 (0:00:00.644) 0:05:40.066 **** 2025-09-18 00:50:44.438632 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438642 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438648 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438654 | orchestrator | 2025-09-18 00:50:44.438661 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-18 00:50:44.438667 | orchestrator | Thursday 18 September 2025 00:49:59 +0000 (0:00:00.319) 0:05:40.385 **** 2025-09-18 00:50:44.438673 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.438679 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438685 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438691 | orchestrator | 2025-09-18 00:50:44.438697 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-18 00:50:44.438703 | orchestrator | Thursday 18 September 2025 00:50:00 +0000 (0:00:00.333) 0:05:40.719 **** 2025-09-18 00:50:44.438709 | orchestrator | skipping: 
[testbed-node-0] 2025-09-18 00:50:44.438715 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.438721 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.438728 | orchestrator | 2025-09-18 00:50:44.438734 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-18 00:50:44.438740 | orchestrator | Thursday 18 September 2025 00:50:00 +0000 (0:00:00.929) 0:05:41.648 **** 2025-09-18 00:50:44.438746 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.438752 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.438758 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.438764 | orchestrator | 2025-09-18 00:50:44.438770 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-18 00:50:44.438777 | orchestrator | Thursday 18 September 2025 00:50:01 +0000 (0:00:00.810) 0:05:42.459 **** 2025-09-18 00:50:44.438783 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.438789 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.438795 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.438801 | orchestrator | 2025-09-18 00:50:44.438807 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-18 00:50:44.438813 | orchestrator | Thursday 18 September 2025 00:50:02 +0000 (0:00:00.346) 0:05:42.806 **** 2025-09-18 00:50:44.438819 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.438825 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.438831 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.438838 | orchestrator | 2025-09-18 00:50:44.438844 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-18 00:50:44.438850 | orchestrator | Thursday 18 September 2025 00:50:03 +0000 (0:00:00.902) 0:05:43.709 **** 2025-09-18 00:50:44.438856 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.438862 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.438868 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.438874 | orchestrator | 2025-09-18 00:50:44.438880 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-18 00:50:44.438886 | orchestrator | Thursday 18 September 2025 00:50:04 +0000 (0:00:01.395) 0:05:45.104 **** 2025-09-18 00:50:44.438892 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.438899 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.438907 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.438914 | orchestrator | 2025-09-18 00:50:44.438920 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-18 00:50:44.438926 | orchestrator | Thursday 18 September 2025 00:50:05 +0000 (0:00:00.883) 0:05:45.988 **** 2025-09-18 00:50:44.438932 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.438938 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.438944 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.438950 | orchestrator | 2025-09-18 00:50:44.438957 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-18 00:50:44.438963 | orchestrator | Thursday 18 September 2025 00:50:14 +0000 (0:00:09.566) 0:05:55.555 **** 2025-09-18 00:50:44.438969 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.438975 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.438981 | orchestrator | ok: [testbed-node-2] 2025-09-18 
00:50:44.438991 | orchestrator | 2025-09-18 00:50:44.438997 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-18 00:50:44.439003 | orchestrator | Thursday 18 September 2025 00:50:15 +0000 (0:00:00.800) 0:05:56.355 **** 2025-09-18 00:50:44.439009 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.439015 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.439022 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.439028 | orchestrator | 2025-09-18 00:50:44.439034 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-18 00:50:44.439040 | orchestrator | Thursday 18 September 2025 00:50:24 +0000 (0:00:08.743) 0:06:05.099 **** 2025-09-18 00:50:44.439046 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.439052 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.439058 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.439064 | orchestrator | 2025-09-18 00:50:44.439070 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-18 00:50:44.439076 | orchestrator | Thursday 18 September 2025 00:50:27 +0000 (0:00:03.213) 0:06:08.312 **** 2025-09-18 00:50:44.439082 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:50:44.439088 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:50:44.439095 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:50:44.439101 | orchestrator | 2025-09-18 00:50:44.439107 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-18 00:50:44.439113 | orchestrator | Thursday 18 September 2025 00:50:32 +0000 (0:00:04.620) 0:06:12.933 **** 2025-09-18 00:50:44.439119 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.439126 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.439132 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.439138 | orchestrator | 2025-09-18 00:50:44.439144 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-18 00:50:44.439150 | orchestrator | Thursday 18 September 2025 00:50:32 +0000 (0:00:00.344) 0:06:13.277 **** 2025-09-18 00:50:44.439156 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.439162 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.439168 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.439174 | orchestrator | 2025-09-18 00:50:44.439181 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-18 00:50:44.439187 | orchestrator | Thursday 18 September 2025 00:50:32 +0000 (0:00:00.366) 0:06:13.644 **** 2025-09-18 00:50:44.439193 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.439202 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.439209 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.439215 | orchestrator | 2025-09-18 00:50:44.439221 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-18 00:50:44.439227 | orchestrator | Thursday 18 September 2025 00:50:33 +0000 (0:00:00.728) 0:06:14.373 **** 2025-09-18 00:50:44.439233 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.439239 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.439245 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.439251 | orchestrator | 2025-09-18 00:50:44.439258 | orchestrator | RUNNING 
HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-18 00:50:44.439264 | orchestrator | Thursday 18 September 2025 00:50:34 +0000 (0:00:00.388) 0:06:14.762 **** 2025-09-18 00:50:44.439270 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.439276 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.439321 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.439327 | orchestrator | 2025-09-18 00:50:44.439333 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-18 00:50:44.439340 | orchestrator | Thursday 18 September 2025 00:50:34 +0000 (0:00:00.376) 0:06:15.138 **** 2025-09-18 00:50:44.439346 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:50:44.439352 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:50:44.439358 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:50:44.439364 | orchestrator | 2025-09-18 00:50:44.439370 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-18 00:50:44.439382 | orchestrator | Thursday 18 September 2025 00:50:34 +0000 (0:00:00.360) 0:06:15.499 **** 2025-09-18 00:50:44.439388 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.439394 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.439401 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.439407 | orchestrator | 2025-09-18 00:50:44.439413 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-18 00:50:44.439419 | orchestrator | Thursday 18 September 2025 00:50:39 +0000 (0:00:05.145) 0:06:20.644 **** 2025-09-18 00:50:44.439425 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:50:44.439431 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:50:44.439437 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:50:44.439443 | orchestrator | 2025-09-18 00:50:44.439449 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:50:44.439455 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-18 00:50:44.439462 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-18 00:50:44.439468 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-18 00:50:44.439474 | orchestrator | 2025-09-18 00:50:44.439480 | orchestrator | 2025-09-18 00:50:44.439490 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:50:44.439496 | orchestrator | Thursday 18 September 2025 00:50:40 +0000 (0:00:00.856) 0:06:21.501 **** 2025-09-18 00:50:44.439502 | orchestrator | =============================================================================== 2025-09-18 00:50:44.439508 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.57s 2025-09-18 00:50:44.439514 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.74s 2025-09-18 00:50:44.439520 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.31s 2025-09-18 00:50:44.439526 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.51s 2025-09-18 00:50:44.439532 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.15s 2025-09-18 00:50:44.439539 | 
orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.14s 2025-09-18 00:50:44.439545 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.96s 2025-09-18 00:50:44.439551 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.62s 2025-09-18 00:50:44.439557 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.53s 2025-09-18 00:50:44.439563 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.52s 2025-09-18 00:50:44.439569 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.51s 2025-09-18 00:50:44.439575 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.45s 2025-09-18 00:50:44.439581 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.35s 2025-09-18 00:50:44.439587 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.30s 2025-09-18 00:50:44.439593 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.28s 2025-09-18 00:50:44.439599 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.27s 2025-09-18 00:50:44.439605 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.25s 2025-09-18 00:50:44.439611 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.18s 2025-09-18 00:50:44.439617 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.14s 2025-09-18 00:50:44.439623 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 3.91s 2025-09-18 00:50:44.439633 | orchestrator | 2025-09-18 00:50:44 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:50:44.439640 | orchestrator | 2025-09-18 00:50:44 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:50:44.439649 | orchestrator | 2025-09-18 00:50:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:47.473422 | orchestrator | 2025-09-18 00:50:47 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:47.474704 | orchestrator | 2025-09-18 00:50:47 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:50:47.477774 | orchestrator | 2025-09-18 00:50:47 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:50:47.477816 | orchestrator | 2025-09-18 00:50:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:50.526941 | orchestrator | 2025-09-18 00:50:50 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:50.527039 | orchestrator | 2025-09-18 00:50:50 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:50:50.527054 | orchestrator | 2025-09-18 00:50:50 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:50:50.527067 | orchestrator | 2025-09-18 00:50:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:53.550383 | orchestrator | 2025-09-18 00:50:53 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:53.550810 | orchestrator | 2025-09-18 00:50:53 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:50:53.551493 
| orchestrator | 2025-09-18 00:50:53 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:50:53.551527 | orchestrator | 2025-09-18 00:50:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:56.580694 | orchestrator | 2025-09-18 00:50:56 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:56.581097 | orchestrator | 2025-09-18 00:50:56 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:50:56.581919 | orchestrator | 2025-09-18 00:50:56 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:50:56.581993 | orchestrator | 2025-09-18 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:50:59.618187 | orchestrator | 2025-09-18 00:50:59 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:50:59.619129 | orchestrator | 2025-09-18 00:50:59 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:50:59.622638 | orchestrator | 2025-09-18 00:50:59 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:50:59.623073 | orchestrator | 2025-09-18 00:50:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:02.741173 | orchestrator | 2025-09-18 00:51:02 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:02.742552 | orchestrator | 2025-09-18 00:51:02 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:02.743063 | orchestrator | 2025-09-18 00:51:02 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:02.743087 | orchestrator | 2025-09-18 00:51:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:05.772071 | orchestrator | 2025-09-18 00:51:05 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:05.772429 | orchestrator | 2025-09-18 00:51:05 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:05.773080 | orchestrator | 2025-09-18 00:51:05 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:05.773106 | orchestrator | 2025-09-18 00:51:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:08.808471 | orchestrator | 2025-09-18 00:51:08 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:08.810392 | orchestrator | 2025-09-18 00:51:08 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:08.812547 | orchestrator | 2025-09-18 00:51:08 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:08.813051 | orchestrator | 2025-09-18 00:51:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:11.849669 | orchestrator | 2025-09-18 00:51:11 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:11.850592 | orchestrator | 2025-09-18 00:51:11 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:11.851270 | orchestrator | 2025-09-18 00:51:11 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:11.851631 | orchestrator | 2025-09-18 00:51:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:14.902802 | orchestrator | 2025-09-18 00:51:14 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:14.904752 | orchestrator | 2025-09-18 00:51:14 | INFO  | 
Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:14.905704 | orchestrator | 2025-09-18 00:51:14 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:14.905753 | orchestrator | 2025-09-18 00:51:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:17.949434 | orchestrator | 2025-09-18 00:51:17 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:17.949993 | orchestrator | 2025-09-18 00:51:17 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:17.951378 | orchestrator | 2025-09-18 00:51:17 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:17.951900 | orchestrator | 2025-09-18 00:51:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:20.998503 | orchestrator | 2025-09-18 00:51:20 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:21.000596 | orchestrator | 2025-09-18 00:51:21 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:21.003147 | orchestrator | 2025-09-18 00:51:21 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:21.003184 | orchestrator | 2025-09-18 00:51:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:24.051446 | orchestrator | 2025-09-18 00:51:24 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:24.051708 | orchestrator | 2025-09-18 00:51:24 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:24.052434 | orchestrator | 2025-09-18 00:51:24 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:24.052453 | orchestrator | 2025-09-18 00:51:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:27.092722 | orchestrator | 2025-09-18 00:51:27 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:27.095413 | orchestrator | 2025-09-18 00:51:27 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:27.097933 | orchestrator | 2025-09-18 00:51:27 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:27.097997 | orchestrator | 2025-09-18 00:51:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:30.142686 | orchestrator | 2025-09-18 00:51:30 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:30.143668 | orchestrator | 2025-09-18 00:51:30 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:30.145551 | orchestrator | 2025-09-18 00:51:30 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:30.146003 | orchestrator | 2025-09-18 00:51:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:33.183490 | orchestrator | 2025-09-18 00:51:33 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:33.185343 | orchestrator | 2025-09-18 00:51:33 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:33.186899 | orchestrator | 2025-09-18 00:51:33 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:33.186943 | orchestrator | 2025-09-18 00:51:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:36.237078 | orchestrator | 2025-09-18 00:51:36 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in 
state STARTED 2025-09-18 00:51:36.241211 | orchestrator | 2025-09-18 00:51:36 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:36.243952 | orchestrator | 2025-09-18 00:51:36 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:36.244647 | orchestrator | 2025-09-18 00:51:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:39.302618 | orchestrator | 2025-09-18 00:51:39 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:39.304263 | orchestrator | 2025-09-18 00:51:39 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:39.306448 | orchestrator | 2025-09-18 00:51:39 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:39.306500 | orchestrator | 2025-09-18 00:51:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:42.352733 | orchestrator | 2025-09-18 00:51:42 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:42.354770 | orchestrator | 2025-09-18 00:51:42 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:42.358172 | orchestrator | 2025-09-18 00:51:42 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:42.358765 | orchestrator | 2025-09-18 00:51:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:45.402772 | orchestrator | 2025-09-18 00:51:45 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:45.404319 | orchestrator | 2025-09-18 00:51:45 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:45.406377 | orchestrator | 2025-09-18 00:51:45 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:45.406658 | orchestrator | 2025-09-18 00:51:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:48.458386 | orchestrator | 2025-09-18 00:51:48 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:48.461369 | orchestrator | 2025-09-18 00:51:48 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:48.463823 | orchestrator | 2025-09-18 00:51:48 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:48.464380 | orchestrator | 2025-09-18 00:51:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:51.504312 | orchestrator | 2025-09-18 00:51:51 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:51.505255 | orchestrator | 2025-09-18 00:51:51 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:51.506850 | orchestrator | 2025-09-18 00:51:51 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:51.507116 | orchestrator | 2025-09-18 00:51:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:54.560010 | orchestrator | 2025-09-18 00:51:54 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:54.562807 | orchestrator | 2025-09-18 00:51:54 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:54.565575 | orchestrator | 2025-09-18 00:51:54 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:54.565595 | orchestrator | 2025-09-18 00:51:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:51:57.616594 | 
orchestrator | 2025-09-18 00:51:57 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:51:57.620296 | orchestrator | 2025-09-18 00:51:57 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:51:57.622466 | orchestrator | 2025-09-18 00:51:57 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:51:57.622676 | orchestrator | 2025-09-18 00:51:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:00.674687 | orchestrator | 2025-09-18 00:52:00 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:00.677887 | orchestrator | 2025-09-18 00:52:00 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:00.681556 | orchestrator | 2025-09-18 00:52:00 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:00.682115 | orchestrator | 2025-09-18 00:52:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:03.722508 | orchestrator | 2025-09-18 00:52:03 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:03.722956 | orchestrator | 2025-09-18 00:52:03 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:03.726215 | orchestrator | 2025-09-18 00:52:03 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:03.726247 | orchestrator | 2025-09-18 00:52:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:06.768790 | orchestrator | 2025-09-18 00:52:06 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:06.772466 | orchestrator | 2025-09-18 00:52:06 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:06.773583 | orchestrator | 2025-09-18 00:52:06 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:06.773857 | orchestrator | 2025-09-18 00:52:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:09.832260 | orchestrator | 2025-09-18 00:52:09 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:09.834876 | orchestrator | 2025-09-18 00:52:09 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:09.838113 | orchestrator | 2025-09-18 00:52:09 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:09.838141 | orchestrator | 2025-09-18 00:52:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:12.882014 | orchestrator | 2025-09-18 00:52:12 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:12.883779 | orchestrator | 2025-09-18 00:52:12 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:12.885445 | orchestrator | 2025-09-18 00:52:12 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:12.885470 | orchestrator | 2025-09-18 00:52:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:15.931147 | orchestrator | 2025-09-18 00:52:15 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:15.933681 | orchestrator | 2025-09-18 00:52:15 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:15.935916 | orchestrator | 2025-09-18 00:52:15 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:15.935955 | orchestrator | 2025-09-18 
00:52:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:18.981415 | orchestrator | 2025-09-18 00:52:18 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:18.983202 | orchestrator | 2025-09-18 00:52:18 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:18.984633 | orchestrator | 2025-09-18 00:52:18 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:18.984669 | orchestrator | 2025-09-18 00:52:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:22.030010 | orchestrator | 2025-09-18 00:52:22 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:22.032430 | orchestrator | 2025-09-18 00:52:22 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:22.033770 | orchestrator | 2025-09-18 00:52:22 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:22.033895 | orchestrator | 2025-09-18 00:52:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:25.076386 | orchestrator | 2025-09-18 00:52:25 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:25.077005 | orchestrator | 2025-09-18 00:52:25 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:25.078411 | orchestrator | 2025-09-18 00:52:25 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:25.078443 | orchestrator | 2025-09-18 00:52:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:28.123566 | orchestrator | 2025-09-18 00:52:28 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:28.125143 | orchestrator | 2025-09-18 00:52:28 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:28.127162 | orchestrator | 2025-09-18 00:52:28 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:28.127205 | orchestrator | 2025-09-18 00:52:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:31.173272 | orchestrator | 2025-09-18 00:52:31 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:31.174830 | orchestrator | 2025-09-18 00:52:31 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:31.176905 | orchestrator | 2025-09-18 00:52:31 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:31.177534 | orchestrator | 2025-09-18 00:52:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:34.222589 | orchestrator | 2025-09-18 00:52:34 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:34.223605 | orchestrator | 2025-09-18 00:52:34 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:34.225474 | orchestrator | 2025-09-18 00:52:34 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:34.225499 | orchestrator | 2025-09-18 00:52:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:37.270485 | orchestrator | 2025-09-18 00:52:37 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:37.272059 | orchestrator | 2025-09-18 00:52:37 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:37.274259 | orchestrator | 2025-09-18 00:52:37 | INFO  | Task 
2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:37.274338 | orchestrator | 2025-09-18 00:52:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:40.320005 | orchestrator | 2025-09-18 00:52:40 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:40.323277 | orchestrator | 2025-09-18 00:52:40 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:40.325503 | orchestrator | 2025-09-18 00:52:40 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:40.325530 | orchestrator | 2025-09-18 00:52:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:43.376478 | orchestrator | 2025-09-18 00:52:43 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:43.376565 | orchestrator | 2025-09-18 00:52:43 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:43.377516 | orchestrator | 2025-09-18 00:52:43 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:43.377536 | orchestrator | 2025-09-18 00:52:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:46.431867 | orchestrator | 2025-09-18 00:52:46 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:46.434908 | orchestrator | 2025-09-18 00:52:46 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:46.438109 | orchestrator | 2025-09-18 00:52:46 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:46.438999 | orchestrator | 2025-09-18 00:52:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:49.492783 | orchestrator | 2025-09-18 00:52:49 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:49.494842 | orchestrator | 2025-09-18 00:52:49 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:49.497095 | orchestrator | 2025-09-18 00:52:49 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:49.497119 | orchestrator | 2025-09-18 00:52:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:52.556569 | orchestrator | 2025-09-18 00:52:52 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state STARTED 2025-09-18 00:52:52.558503 | orchestrator | 2025-09-18 00:52:52 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:52:52.560591 | orchestrator | 2025-09-18 00:52:52 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:52:52.560647 | orchestrator | 2025-09-18 00:52:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:52:55.618741 | orchestrator | 2025-09-18 00:52:55 | INFO  | Task f9efe551-4669-434b-badc-bed1065901bd is in state SUCCESS 2025-09-18 00:52:55.620376 | orchestrator | 2025-09-18 00:52:55.620420 | orchestrator | 2025-09-18 00:52:55.620433 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-18 00:52:55.620445 | orchestrator | 2025-09-18 00:52:55.620457 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-18 00:52:55.620468 | orchestrator | Thursday 18 September 2025 00:41:49 +0000 (0:00:00.590) 0:00:00.590 **** 2025-09-18 00:52:55.620481 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.620493 | orchestrator | 2025-09-18 00:52:55.620504 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-18 00:52:55.620515 | orchestrator | Thursday 18 September 2025 00:41:51 +0000 (0:00:01.294) 0:00:01.885 **** 2025-09-18 00:52:55.620525 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.620538 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.620548 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.620559 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.620569 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.620580 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.620590 | orchestrator | 2025-09-18 00:52:55.620601 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-18 00:52:55.620612 | orchestrator | Thursday 18 September 2025 00:41:52 +0000 (0:00:01.590) 0:00:03.475 **** 2025-09-18 00:52:55.620623 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.620633 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.620644 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.620654 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.620665 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.620675 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.620685 | orchestrator | 2025-09-18 00:52:55.620711 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-18 00:52:55.620723 | orchestrator | Thursday 18 September 2025 00:41:53 +0000 (0:00:01.009) 0:00:04.485 **** 2025-09-18 00:52:55.620733 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.620744 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.620755 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.620765 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.620776 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.620786 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.620797 | orchestrator | 2025-09-18 00:52:55.620807 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-18 00:52:55.620818 | orchestrator | Thursday 18 September 2025 00:41:54 +0000 (0:00:00.828) 0:00:05.313 **** 2025-09-18 00:52:55.620829 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.620839 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.620850 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.620860 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.620871 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.620881 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.620892 | orchestrator | 2025-09-18 00:52:55.620902 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-18 00:52:55.620913 | orchestrator | Thursday 18 September 2025 00:41:55 +0000 (0:00:00.667) 0:00:05.981 **** 2025-09-18 00:52:55.620924 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.620934 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.620945 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.620956 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.620969 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.620981 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.620994 | orchestrator | 2025-09-18 00:52:55.621006 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python] ********************* 2025-09-18 00:52:55.621019 | orchestrator | Thursday 18 September 2025 00:41:55 +0000 (0:00:00.712) 0:00:06.694 **** 2025-09-18 00:52:55.621032 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.621045 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.621071 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.621084 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.621096 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.621108 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.621120 | orchestrator | 2025-09-18 00:52:55.621132 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-18 00:52:55.621145 | orchestrator | Thursday 18 September 2025 00:41:56 +0000 (0:00:00.893) 0:00:07.587 **** 2025-09-18 00:52:55.621158 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.621172 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.621184 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.621196 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.621209 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.621221 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.621233 | orchestrator | 2025-09-18 00:52:55.621443 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-18 00:52:55.621456 | orchestrator | Thursday 18 September 2025 00:41:57 +0000 (0:00:00.798) 0:00:08.385 **** 2025-09-18 00:52:55.621467 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.621478 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.621489 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.621500 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.621510 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.621521 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.621532 | orchestrator | 2025-09-18 00:52:55.621543 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-18 00:52:55.621554 | orchestrator | Thursday 18 September 2025 00:41:58 +0000 (0:00:00.768) 0:00:09.154 **** 2025-09-18 00:52:55.621564 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:52:55.621575 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:52:55.621586 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:52:55.621597 | orchestrator | 2025-09-18 00:52:55.621608 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-18 00:52:55.621619 | orchestrator | Thursday 18 September 2025 00:41:59 +0000 (0:00:00.580) 0:00:09.735 **** 2025-09-18 00:52:55.621630 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.621641 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.621651 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.621662 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.621672 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.621683 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.621693 | orchestrator | 2025-09-18 00:52:55.621717 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-18 00:52:55.621729 | orchestrator | Thursday 18 September 2025 00:42:00 +0000 (0:00:01.338) 
0:00:11.074 **** 2025-09-18 00:52:55.621740 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:52:55.621751 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:52:55.621762 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:52:55.621772 | orchestrator | 2025-09-18 00:52:55.621783 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-18 00:52:55.621794 | orchestrator | Thursday 18 September 2025 00:42:03 +0000 (0:00:02.813) 0:00:13.888 **** 2025-09-18 00:52:55.621805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 00:52:55.621816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 00:52:55.621827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 00:52:55.621838 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.621848 | orchestrator | 2025-09-18 00:52:55.621859 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-18 00:52:55.621870 | orchestrator | Thursday 18 September 2025 00:42:03 +0000 (0:00:00.408) 0:00:14.296 **** 2025-09-18 00:52:55.621894 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.621908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.621919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.621930 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.621941 | orchestrator | 2025-09-18 00:52:55.621952 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-18 00:52:55.621963 | orchestrator | Thursday 18 September 2025 00:42:04 +0000 (0:00:00.914) 0:00:15.210 **** 2025-09-18 00:52:55.621976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.622181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.622212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.622224 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622235 | orchestrator | 2025-09-18 00:52:55.622246 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-18 00:52:55.622257 | orchestrator | Thursday 18 September 2025 00:42:04 +0000 (0:00:00.149) 0:00:15.360 **** 2025-09-18 00:52:55.622282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-18 00:42:00.891846', 'end': '2025-09-18 00:42:01.148558', 'delta': '0:00:00.256712', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.622319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-18 00:42:01.758767', 'end': '2025-09-18 00:42:02.043719', 'delta': '0:00:00.284952', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.622347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-18 00:42:02.736937', 'end': '2025-09-18 00:42:03.039515', 'delta': '0:00:00.302578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.622359 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622463 | orchestrator | 2025-09-18 00:52:55.622477 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-18 00:52:55.622488 | orchestrator | Thursday 18 September 2025 00:42:05 +0000 (0:00:00.438) 0:00:15.799 **** 2025-09-18 00:52:55.622499 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.622510 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.622521 | orchestrator | ok: [testbed-node-5] 
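
The "Find a running mon container" probe and the "Set_fact running_mon - container" items above show the check the play relies on: it runs docker ps -q --filter name=ceph-mon-<hostname> against each monitor host and treats empty output as "no monitor container running yet"; in this build all three probes returned empty stdout, as the module dumps show. A minimal sketch of that probe (not ceph-ansible itself, and run locally for simplicity, whereas the play delegates the command to each monitor node) might look like this:

    # Minimal sketch, assuming Docker is reachable from where this runs.
    import subprocess

    def find_running_mon(mon_hosts):
        for host in mon_hosts:
            result = subprocess.run(
                ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
                capture_output=True, text=True, check=False,
            )
            if result.stdout.strip():   # a container ID came back: this mon is already up
                return host
        return None                     # as in this run: every probe returned empty stdout

    print(find_running_mon(["testbed-node-0", "testbed-node-1", "testbed-node-2"]))
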
2025-09-18 00:52:55.622531 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.622542 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.622552 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.622563 | orchestrator | 2025-09-18 00:52:55.622574 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-18 00:52:55.622585 | orchestrator | Thursday 18 September 2025 00:42:06 +0000 (0:00:01.160) 0:00:16.960 **** 2025-09-18 00:52:55.622596 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.622607 | orchestrator | 2025-09-18 00:52:55.622617 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-18 00:52:55.622628 | orchestrator | Thursday 18 September 2025 00:42:06 +0000 (0:00:00.655) 0:00:17.615 **** 2025-09-18 00:52:55.622639 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622649 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.622660 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.622671 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.622681 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.622692 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.622702 | orchestrator | 2025-09-18 00:52:55.622713 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-18 00:52:55.622724 | orchestrator | Thursday 18 September 2025 00:42:08 +0000 (0:00:01.395) 0:00:19.010 **** 2025-09-18 00:52:55.622734 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.622745 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622756 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.622766 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.622777 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.622787 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.622798 | orchestrator | 2025-09-18 00:52:55.622809 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-18 00:52:55.622819 | orchestrator | Thursday 18 September 2025 00:42:10 +0000 (0:00:02.177) 0:00:21.187 **** 2025-09-18 00:52:55.622830 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622841 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.622852 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.622862 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.622873 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.622883 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.622901 | orchestrator | 2025-09-18 00:52:55.622913 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-18 00:52:55.622923 | orchestrator | Thursday 18 September 2025 00:42:12 +0000 (0:00:01.591) 0:00:22.779 **** 2025-09-18 00:52:55.622934 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622945 | orchestrator | 2025-09-18 00:52:55.622956 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-18 00:52:55.622966 | orchestrator | Thursday 18 September 2025 00:42:12 +0000 (0:00:00.602) 0:00:23.381 **** 2025-09-18 00:52:55.622977 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.622988 | orchestrator | 2025-09-18 00:52:55.622998 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2025-09-18 00:52:55.623009 | orchestrator | Thursday 18 September 2025 00:42:13 +0000 (0:00:00.391) 0:00:23.773 **** 2025-09-18 00:52:55.623019 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.623030 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.623040 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.623051 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.623062 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.623072 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.623083 | orchestrator | 2025-09-18 00:52:55.623117 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-18 00:52:55.623218 | orchestrator | Thursday 18 September 2025 00:42:13 +0000 (0:00:00.747) 0:00:24.521 **** 2025-09-18 00:52:55.623237 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.623254 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.623269 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.623284 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.623368 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.623389 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.623406 | orchestrator | 2025-09-18 00:52:55.623426 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-18 00:52:55.623437 | orchestrator | Thursday 18 September 2025 00:42:14 +0000 (0:00:00.907) 0:00:25.429 **** 2025-09-18 00:52:55.623448 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.623459 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.623469 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.623480 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.623653 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.623667 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.623677 | orchestrator | 2025-09-18 00:52:55.623688 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-18 00:52:55.623699 | orchestrator | Thursday 18 September 2025 00:42:15 +0000 (0:00:00.809) 0:00:26.238 **** 2025-09-18 00:52:55.623709 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.623720 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.623730 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.623741 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.623751 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.623769 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.623780 | orchestrator | 2025-09-18 00:52:55.623791 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-18 00:52:55.623802 | orchestrator | Thursday 18 September 2025 00:42:16 +0000 (0:00:00.875) 0:00:27.114 **** 2025-09-18 00:52:55.623812 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.623823 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.623833 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.623844 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.623854 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.623864 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.623874 | orchestrator | 2025-09-18 00:52:55.623883 | orchestrator | TASK [ceph-facts : Resolve 
bluestore_wal_device link(s)] *********************** 2025-09-18 00:52:55.623893 | orchestrator | Thursday 18 September 2025 00:42:16 +0000 (0:00:00.540) 0:00:27.654 **** 2025-09-18 00:52:55.623913 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.623922 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.623931 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.623941 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.623950 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.623960 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.623969 | orchestrator | 2025-09-18 00:52:55.623978 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-18 00:52:55.623988 | orchestrator | Thursday 18 September 2025 00:42:17 +0000 (0:00:00.639) 0:00:28.294 **** 2025-09-18 00:52:55.623998 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.624007 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.624017 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.624026 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.624036 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.624045 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.624054 | orchestrator | 2025-09-18 00:52:55.624064 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-18 00:52:55.624073 | orchestrator | Thursday 18 September 2025 00:42:18 +0000 (0:00:00.685) 0:00:28.979 **** 2025-09-18 00:52:55.624085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655', 'dm-uuid-LVM-gJumQCyZ1bfxhO0dfjPwEejz9ohnhr3d478wME9KHSsMPzezVqwZlBzRBck7giHw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d', 'dm-uuid-LVM-C4siaejQmTKzx2KcnmVAte27Kk5gro23PrOOSEKuHroY5CBUeLj0Jw30TjqZQgJ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b', 'dm-uuid-LVM-LU2WChruXwDGJXhDT4p35rNV8sSdVPmlIbCWPRkS3bJzeJa8OYo8vVFzIQsRVwrj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 
00:52:55.624128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4', 'dm-uuid-LVM-ej82I6MoUZWchGQS1y2ZyHJCdg8n8p3EWz8LoAAbpQlv51jBj80VxSmVjRuEteR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-68Bm8z-3zKE-kRH3-9AQX-alhg-bxaz-2X4H8K', 'scsi-0QEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca', 'scsi-SQEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-01LZMs-Pdh3-IDPz-xI2P-Fjst-4xgK-QgzMxM', 'scsi-0QEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738', 'scsi-SQEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97', 'scsi-SQEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qdztzf-yJQx-6QsS-ue8y-VY8R-Ex68-Rs4ML0', 'scsi-0QEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8', 'scsi-SQEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5', 'dm-uuid-LVM-TH2vhzQ3frcs9a69TU5wE7rT1r26iytTTaI0d0Ks3AhpggiVBlIHs2kJM5ib59Hu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lOA0Q4-7mCN-oUtO-k87H-t1uw-28O9-bDn4PP', 'scsi-0QEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31', 'scsi-SQEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0', 'dm-uuid-LVM-qycmhnh5qlb9tVSHUxG1t8mss4Ah6MDvAYLOSvYJYOvvz5TVq9e3dFYRGrXLqJpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d', 'scsi-SQEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624524 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.624534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TE10AL-7Csv-4u2G-ozSb-13Za-yvZs-KCadDL', 'scsi-0QEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739', 'scsi-SQEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jfe1uI-hXch-v6I9-89UP-ov5N-PxM2-Ar1e3o', 'scsi-0QEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd', 'scsi-SQEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c', 'scsi-SQEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.624958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.624990 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.625000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-09-18 00:52:55.625024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part1', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part14', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part15', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part16', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.625115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.625125 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.625135 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.625145 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.625155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:52:55.625250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.625271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:52:55.625282 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.625292 | orchestrator | 2025-09-18 00:52:55.625351 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-18 00:52:55.625361 | orchestrator | Thursday 18 September 2025 00:42:19 +0000 (0:00:01.682) 0:00:30.662 **** 2025-09-18 00:52:55.625371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655', 'dm-uuid-LVM-gJumQCyZ1bfxhO0dfjPwEejz9ohnhr3d478wME9KHSsMPzezVqwZlBzRBck7giHw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d', 'dm-uuid-LVM-C4siaejQmTKzx2KcnmVAte27Kk5gro23PrOOSEKuHroY5CBUeLj0Jw30TjqZQgJ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625482 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b', 'dm-uuid-LVM-LU2WChruXwDGJXhDT4p35rNV8sSdVPmlIbCWPRkS3bJzeJa8OYo8vVFzIQsRVwrj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625508 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4', 'dm-uuid-LVM-ej82I6MoUZWchGQS1y2ZyHJCdg8n8p3EWz8LoAAbpQlv51jBj80VxSmVjRuEteR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625551 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625618 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-68Bm8z-3zKE-kRH3-9AQX-alhg-bxaz-2X4H8K', 'scsi-0QEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca', 'scsi-SQEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.625682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627134 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-01LZMs-Pdh3-IDPz-xI2P-Fjst-4xgK-QgzMxM', 'scsi-0QEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738', 'scsi-SQEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97', 'scsi-SQEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qdztzf-yJQx-6QsS-ue8y-VY8R-Ex68-Rs4ML0', 'scsi-0QEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8', 'scsi-SQEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lOA0Q4-7mCN-oUtO-k87H-t1uw-28O9-bDn4PP', 'scsi-0QEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31', 'scsi-SQEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d', 'scsi-SQEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5', 'dm-uuid-LVM-TH2vhzQ3frcs9a69TU5wE7rT1r26iytTTaI0d0Ks3AhpggiVBlIHs2kJM5ib59Hu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0', 'dm-uuid-LVM-qycmhnh5qlb9tVSHUxG1t8mss4Ah6MDvAYLOSvYJYOvvz5TVq9e3dFYRGrXLqJpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627341 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627349 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.627363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627372 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627391 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627412 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627421 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627440 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627454 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627463 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627475 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 
'value': {'holders': ['ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TE10AL-7Csv-4u2G-ozSb-13Za-yvZs-KCadDL', 'scsi-0QEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739', 'scsi-SQEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627484 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627496 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627509 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jfe1uI-hXch-v6I9-89UP-ov5N-PxM2-Ar1e3o', 'scsi-0QEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd', 'scsi-SQEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627525 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c', 'scsi-SQEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627551 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a1b46689-8631-4ced-99c9-69cbba2d631b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627565 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627573 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.627586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627594 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627609 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627622 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627630 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627639 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627647 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627661 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627670 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627682 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part1', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part14', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part15', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part16', 'scsi-SQEMU_QEMU_HARDDISK_736f4a0a-a81c-486b-b717-c1252e00987e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627696 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627704 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.627712 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.627720 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.627732 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627744 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627758 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627768 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627778 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627787 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627801 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627811 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627829 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e65f687d-72b1-49af-9153-f020b82bb8f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627840 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:52:55.627850 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.627859 | orchestrator | 2025-09-18 00:52:55.627869 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-18 00:52:55.627879 | orchestrator | Thursday 18 September 2025 00:42:20 +0000 (0:00:00.991) 0:00:31.653 **** 2025-09-18 00:52:55.627893 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.627901 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.627909 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.627917 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.627925 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.627932 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.627948 | orchestrator | 2025-09-18 00:52:55.627956 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-18 00:52:55.627964 | orchestrator | Thursday 18 September 2025 00:42:21 +0000 (0:00:01.019) 0:00:32.673 **** 2025-09-18 00:52:55.627972 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.627979 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.627987 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.627995 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.628003 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.628011 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.628018 | orchestrator | 2025-09-18 00:52:55.628026 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 00:52:55.628034 | orchestrator | Thursday 18 September 2025 00:42:22 +0000 (0:00:00.725) 0:00:33.399 **** 2025-09-18 00:52:55.628042 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628050 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628058 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628066 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.628073 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.628081 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.628089 | orchestrator | 2025-09-18 00:52:55.628097 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 00:52:55.628105 | orchestrator | Thursday 18 September 2025 00:42:23 +0000 (0:00:00.847) 0:00:34.246 **** 2025-09-18 00:52:55.628113 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628124 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628132 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628139 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.628147 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.628155 | orchestrator | skipping: [testbed-node-2] 
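
The long per-item "skipping" dumps earlier in this task output come from ceph-ansible's device enumeration: every entry of ansible_facts.devices (the loop devices, sda and its partitions, the config-drive sr0, and the Ceph-claimed sdb/sdc) is printed once per host as a skipped loop item. The false_condition fields show why: on testbed-node-5 the guard 'osd_auto_discovery | default(False) | bool' evaluates to false, while on testbed-node-0/1/2 the guard 'inventory_hostname in groups.get(osd_group_name, [])' does, so no item is acted on and the output is expected noise rather than an error. A minimal sketch of guarded loops of this shape follows; the task and variable names are invented for illustration and are not the actual ceph-ansible tasks:

    # Hedged illustration only; names below are made up, not ceph-ansible's.
    # Control-plane hosts skip every item because they are not in the OSD group:
    - name: Collect device facts for OSD hosts (illustrative)
      ansible.builtin.set_fact:
        _host_devices: "{{ _host_devices | default([]) + [item.key] }}"
      loop: "{{ ansible_facts.devices | dict2items }}"
      when: inventory_hostname in groups.get(osd_group_name, [])

    # OSD hosts skip the auto-discovery branch because the flag is left at its default:
    - name: Auto-discover empty disks as OSD candidates (illustrative)
      ansible.builtin.set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      loop: "{{ ansible_facts.devices | dict2items }}"
      when: osd_auto_discovery | default(False) | bool

Because the when guard is evaluated per loop item, Ansible emits one "skipping" line for each device in ansible_facts.devices, which is why the full device dictionaries appear verbatim in the output above.
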
2025-09-18 00:52:55.628163 | orchestrator | 2025-09-18 00:52:55.628171 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 00:52:55.628179 | orchestrator | Thursday 18 September 2025 00:42:24 +0000 (0:00:00.706) 0:00:34.953 **** 2025-09-18 00:52:55.628186 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628194 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628202 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628210 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.628217 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.628225 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.628233 | orchestrator | 2025-09-18 00:52:55.628241 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 00:52:55.628249 | orchestrator | Thursday 18 September 2025 00:42:25 +0000 (0:00:00.825) 0:00:35.778 **** 2025-09-18 00:52:55.628257 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628264 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628272 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628280 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.628287 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.628340 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.628350 | orchestrator | 2025-09-18 00:52:55.628358 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-18 00:52:55.628366 | orchestrator | Thursday 18 September 2025 00:42:26 +0000 (0:00:00.955) 0:00:36.733 **** 2025-09-18 00:52:55.628374 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-18 00:52:55.628382 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-18 00:52:55.628390 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-18 00:52:55.628398 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-18 00:52:55.628406 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-18 00:52:55.628414 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-18 00:52:55.628422 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 00:52:55.628429 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-18 00:52:55.628443 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-18 00:52:55.628451 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-18 00:52:55.628459 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-18 00:52:55.628467 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-18 00:52:55.628475 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-18 00:52:55.628482 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-18 00:52:55.628490 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-18 00:52:55.628498 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-18 00:52:55.628506 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-18 00:52:55.628514 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-18 00:52:55.628522 | orchestrator | 2025-09-18 00:52:55.628529 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-18 00:52:55.628537 | orchestrator | Thursday 18 
September 2025 00:42:29 +0000 (0:00:03.392) 0:00:40.125 **** 2025-09-18 00:52:55.628545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 00:52:55.628553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-18 00:52:55.628561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-18 00:52:55.628569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-18 00:52:55.628576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 00:52:55.628584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 00:52:55.628592 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-18 00:52:55.628607 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-18 00:52:55.628615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-18 00:52:55.628628 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 00:52:55.628644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 00:52:55.628652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 00:52:55.628660 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-18 00:52:55.628675 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-18 00:52:55.628683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-18 00:52:55.628691 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.628698 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.628706 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-18 00:52:55.628714 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-18 00:52:55.628722 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-18 00:52:55.628730 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.628737 | orchestrator | 2025-09-18 00:52:55.628745 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-18 00:52:55.628753 | orchestrator | Thursday 18 September 2025 00:42:30 +0000 (0:00:00.600) 0:00:40.726 **** 2025-09-18 00:52:55.628761 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.628769 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.628776 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.628788 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.628796 | orchestrator | 2025-09-18 00:52:55.628804 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-18 00:52:55.628813 | orchestrator | Thursday 18 September 2025 00:42:31 +0000 (0:00:01.037) 0:00:41.763 **** 2025-09-18 00:52:55.628826 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628834 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628841 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628849 | orchestrator | 2025-09-18 00:52:55.628857 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-18 00:52:55.628864 | orchestrator | Thursday 18 September 2025 00:42:31 +0000 (0:00:00.390) 0:00:42.153 **** 2025-09-18 00:52:55.628871 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628878 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628884 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628891 | orchestrator | 2025-09-18 00:52:55.628897 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-18 00:52:55.628904 | orchestrator | Thursday 18 September 2025 00:42:31 +0000 (0:00:00.417) 0:00:42.571 **** 2025-09-18 00:52:55.628911 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.628917 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.628924 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.628930 | orchestrator | 2025-09-18 00:52:55.628937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-18 00:52:55.628944 | orchestrator | Thursday 18 September 2025 00:42:32 +0000 (0:00:00.465) 0:00:43.036 **** 2025-09-18 00:52:55.628950 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.628957 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.628964 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.628970 | orchestrator | 2025-09-18 00:52:55.628977 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-18 00:52:55.628984 | orchestrator | Thursday 18 September 2025 00:42:33 +0000 (0:00:00.680) 0:00:43.717 **** 2025-09-18 00:52:55.628990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.628997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.629004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.629010 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629017 | orchestrator | 2025-09-18 00:52:55.629023 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-18 00:52:55.629030 | orchestrator | Thursday 18 September 2025 00:42:33 +0000 (0:00:00.426) 0:00:44.143 **** 2025-09-18 00:52:55.629037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.629043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.629050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.629056 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629063 | orchestrator | 2025-09-18 00:52:55.629069 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-18 00:52:55.629076 | orchestrator | Thursday 18 September 2025 00:42:33 +0000 (0:00:00.412) 0:00:44.555 **** 2025-09-18 00:52:55.629083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.629089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.629096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.629102 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629109 | orchestrator | 2025-09-18 00:52:55.629116 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-18 00:52:55.629122 | orchestrator | Thursday 18 September 
2025 00:42:34 +0000 (0:00:00.344) 0:00:44.900 **** 2025-09-18 00:52:55.629129 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.629135 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.629142 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.629148 | orchestrator | 2025-09-18 00:52:55.629155 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-18 00:52:55.629162 | orchestrator | Thursday 18 September 2025 00:42:34 +0000 (0:00:00.405) 0:00:45.306 **** 2025-09-18 00:52:55.629168 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-18 00:52:55.629180 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-18 00:52:55.629187 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-18 00:52:55.629193 | orchestrator | 2025-09-18 00:52:55.629203 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-18 00:52:55.629210 | orchestrator | Thursday 18 September 2025 00:42:35 +0000 (0:00:01.357) 0:00:46.664 **** 2025-09-18 00:52:55.629217 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:52:55.629224 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:52:55.629230 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:52:55.629237 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-18 00:52:55.629244 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-18 00:52:55.629250 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-18 00:52:55.629257 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-18 00:52:55.629263 | orchestrator | 2025-09-18 00:52:55.629270 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-18 00:52:55.629277 | orchestrator | Thursday 18 September 2025 00:42:36 +0000 (0:00:00.994) 0:00:47.658 **** 2025-09-18 00:52:55.629283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:52:55.629290 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:52:55.629310 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:52:55.629320 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-18 00:52:55.629326 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-18 00:52:55.629333 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-18 00:52:55.629339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-18 00:52:55.629346 | orchestrator | 2025-09-18 00:52:55.629353 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.629359 | orchestrator | Thursday 18 September 2025 00:42:38 +0000 (0:00:01.835) 0:00:49.494 **** 2025-09-18 00:52:55.629366 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.629373 | orchestrator | 2025-09-18 
00:52:55.629379 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.629386 | orchestrator | Thursday 18 September 2025 00:42:40 +0000 (0:00:01.438) 0:00:50.933 **** 2025-09-18 00:52:55.629393 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.629400 | orchestrator | 2025-09-18 00:52:55.629406 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.629413 | orchestrator | Thursday 18 September 2025 00:42:41 +0000 (0:00:01.215) 0:00:52.148 **** 2025-09-18 00:52:55.629419 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629426 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.629433 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.629439 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.629446 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.629452 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.629459 | orchestrator | 2025-09-18 00:52:55.629466 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.629472 | orchestrator | Thursday 18 September 2025 00:42:42 +0000 (0:00:01.197) 0:00:53.345 **** 2025-09-18 00:52:55.629486 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.629492 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.629499 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.629506 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.629512 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.629519 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.629525 | orchestrator | 2025-09-18 00:52:55.629532 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.629538 | orchestrator | Thursday 18 September 2025 00:42:43 +0000 (0:00:01.047) 0:00:54.393 **** 2025-09-18 00:52:55.629545 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.629551 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.629558 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.629565 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.629571 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.629578 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.629584 | orchestrator | 2025-09-18 00:52:55.629591 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 00:52:55.629598 | orchestrator | Thursday 18 September 2025 00:42:44 +0000 (0:00:00.993) 0:00:55.387 **** 2025-09-18 00:52:55.629604 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.629611 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.629617 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.629624 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.629630 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.629637 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.629643 | orchestrator | 2025-09-18 00:52:55.629650 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.629657 | orchestrator | Thursday 18 September 2025 00:42:45 +0000 (0:00:00.786) 0:00:56.174 **** 2025-09-18 00:52:55.629663 | orchestrator | 
skipping: [testbed-node-3] 2025-09-18 00:52:55.629670 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.629677 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.629683 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.629690 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.629696 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.629703 | orchestrator | 2025-09-18 00:52:55.629710 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.629720 | orchestrator | Thursday 18 September 2025 00:42:46 +0000 (0:00:01.230) 0:00:57.404 **** 2025-09-18 00:52:55.629726 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629733 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.629740 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.629746 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.629753 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.629759 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.629766 | orchestrator | 2025-09-18 00:52:55.629773 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.629779 | orchestrator | Thursday 18 September 2025 00:42:47 +0000 (0:00:00.710) 0:00:58.114 **** 2025-09-18 00:52:55.629786 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629792 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.629799 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.629805 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.629812 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.629819 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.629825 | orchestrator | 2025-09-18 00:52:55.629832 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.629838 | orchestrator | Thursday 18 September 2025 00:42:48 +0000 (0:00:01.239) 0:00:59.354 **** 2025-09-18 00:52:55.629845 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.629851 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.629858 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.629865 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.629875 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.629882 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.629888 | orchestrator | 2025-09-18 00:52:55.629895 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.629904 | orchestrator | Thursday 18 September 2025 00:42:50 +0000 (0:00:02.078) 0:01:01.432 **** 2025-09-18 00:52:55.629911 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.629918 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.629924 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.629931 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.629937 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.629944 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.629950 | orchestrator | 2025-09-18 00:52:55.629957 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 00:52:55.629964 | orchestrator | Thursday 18 September 2025 00:42:52 +0000 (0:00:01.460) 0:01:02.893 **** 2025-09-18 00:52:55.629970 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.629977 | orchestrator | skipping: 
[testbed-node-4] 2025-09-18 00:52:55.629984 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.629990 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.629997 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630003 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630010 | orchestrator | 2025-09-18 00:52:55.630048 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.630056 | orchestrator | Thursday 18 September 2025 00:42:53 +0000 (0:00:00.867) 0:01:03.760 **** 2025-09-18 00:52:55.630063 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.630070 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.630077 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.630083 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.630090 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.630096 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.630103 | orchestrator | 2025-09-18 00:52:55.630110 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 00:52:55.630116 | orchestrator | Thursday 18 September 2025 00:42:53 +0000 (0:00:00.638) 0:01:04.398 **** 2025-09-18 00:52:55.630123 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.630129 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.630136 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.630142 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630148 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630155 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630161 | orchestrator | 2025-09-18 00:52:55.630168 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 00:52:55.630175 | orchestrator | Thursday 18 September 2025 00:42:54 +0000 (0:00:00.690) 0:01:05.089 **** 2025-09-18 00:52:55.630181 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.630188 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.630194 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.630201 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630207 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630214 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630220 | orchestrator | 2025-09-18 00:52:55.630227 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.630233 | orchestrator | Thursday 18 September 2025 00:42:54 +0000 (0:00:00.497) 0:01:05.586 **** 2025-09-18 00:52:55.630240 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.630246 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.630253 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.630259 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630265 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630272 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630279 | orchestrator | 2025-09-18 00:52:55.630285 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.630292 | orchestrator | Thursday 18 September 2025 00:42:55 +0000 (0:00:00.652) 0:01:06.239 **** 2025-09-18 00:52:55.630336 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.630343 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.630350 | orchestrator | 
skipping: [testbed-node-5] 2025-09-18 00:52:55.630357 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630363 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630370 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630376 | orchestrator | 2025-09-18 00:52:55.630383 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.630390 | orchestrator | Thursday 18 September 2025 00:42:56 +0000 (0:00:00.499) 0:01:06.739 **** 2025-09-18 00:52:55.630396 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.630403 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.630409 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.630416 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630423 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630429 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630436 | orchestrator | 2025-09-18 00:52:55.630453 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 00:52:55.630460 | orchestrator | Thursday 18 September 2025 00:42:56 +0000 (0:00:00.678) 0:01:07.417 **** 2025-09-18 00:52:55.630467 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.630474 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.630480 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.630487 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.630493 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.630500 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.630507 | orchestrator | 2025-09-18 00:52:55.630513 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.630520 | orchestrator | Thursday 18 September 2025 00:42:57 +0000 (0:00:00.533) 0:01:07.951 **** 2025-09-18 00:52:55.630527 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.630533 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.630540 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.630547 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.630553 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.630560 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.630566 | orchestrator | 2025-09-18 00:52:55.630572 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.630579 | orchestrator | Thursday 18 September 2025 00:42:57 +0000 (0:00:00.748) 0:01:08.699 **** 2025-09-18 00:52:55.630585 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.630591 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.630597 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.630603 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.630609 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.630615 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.630621 | orchestrator | 2025-09-18 00:52:55.630630 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-18 00:52:55.630636 | orchestrator | Thursday 18 September 2025 00:42:59 +0000 (0:00:01.272) 0:01:09.972 **** 2025-09-18 00:52:55.630643 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.630649 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.630655 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.630661 | orchestrator | changed: 
[testbed-node-0] 2025-09-18 00:52:55.630667 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.630673 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.630679 | orchestrator | 2025-09-18 00:52:55.630685 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-18 00:52:55.630692 | orchestrator | Thursday 18 September 2025 00:43:01 +0000 (0:00:02.059) 0:01:12.031 **** 2025-09-18 00:52:55.630698 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.630704 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.630710 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.630720 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.630726 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.630732 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.630738 | orchestrator | 2025-09-18 00:52:55.630744 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-18 00:52:55.630751 | orchestrator | Thursday 18 September 2025 00:43:03 +0000 (0:00:02.456) 0:01:14.488 **** 2025-09-18 00:52:55.630757 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.630763 | orchestrator | 2025-09-18 00:52:55.630769 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-18 00:52:55.630775 | orchestrator | Thursday 18 September 2025 00:43:04 +0000 (0:00:01.057) 0:01:15.546 **** 2025-09-18 00:52:55.630781 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.630787 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.630793 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.630799 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630805 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630811 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630818 | orchestrator | 2025-09-18 00:52:55.630824 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-18 00:52:55.630830 | orchestrator | Thursday 18 September 2025 00:43:05 +0000 (0:00:00.550) 0:01:16.098 **** 2025-09-18 00:52:55.630836 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.630860 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.630867 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.630873 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.630879 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.630885 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.630891 | orchestrator | 2025-09-18 00:52:55.630897 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-18 00:52:55.630903 | orchestrator | Thursday 18 September 2025 00:43:06 +0000 (0:00:00.748) 0:01:16.846 **** 2025-09-18 00:52:55.630909 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-18 00:52:55.630916 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-18 00:52:55.630922 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-18 00:52:55.630928 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-18 
00:52:55.630934 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-18 00:52:55.630940 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-18 00:52:55.630946 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-18 00:52:55.630952 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-18 00:52:55.630959 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-18 00:52:55.630965 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-18 00:52:55.630971 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-18 00:52:55.630981 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-18 00:52:55.630988 | orchestrator | 2025-09-18 00:52:55.630994 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-18 00:52:55.631000 | orchestrator | Thursday 18 September 2025 00:43:07 +0000 (0:00:01.308) 0:01:18.155 **** 2025-09-18 00:52:55.631006 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.631012 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.631019 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.631025 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.631036 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.631042 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.631048 | orchestrator | 2025-09-18 00:52:55.631054 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-18 00:52:55.631060 | orchestrator | Thursday 18 September 2025 00:43:08 +0000 (0:00:01.415) 0:01:19.570 **** 2025-09-18 00:52:55.631066 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631073 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631079 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631085 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631091 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631097 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631103 | orchestrator | 2025-09-18 00:52:55.631110 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-18 00:52:55.631116 | orchestrator | Thursday 18 September 2025 00:43:09 +0000 (0:00:00.757) 0:01:20.328 **** 2025-09-18 00:52:55.631122 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631128 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631134 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631143 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631149 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631155 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631162 | orchestrator | 2025-09-18 00:52:55.631168 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-18 00:52:55.631174 | orchestrator | Thursday 18 September 2025 00:43:10 +0000 (0:00:00.758) 0:01:21.087 **** 2025-09-18 00:52:55.631183 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631193 | orchestrator | skipping: [testbed-node-4] 2025-09-18 
00:52:55.631203 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631213 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631223 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631233 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631242 | orchestrator | 2025-09-18 00:52:55.631252 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-18 00:52:55.631261 | orchestrator | Thursday 18 September 2025 00:43:10 +0000 (0:00:00.567) 0:01:21.654 **** 2025-09-18 00:52:55.631271 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.631280 | orchestrator | 2025-09-18 00:52:55.631289 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-18 00:52:55.631315 | orchestrator | Thursday 18 September 2025 00:43:12 +0000 (0:00:01.121) 0:01:22.776 **** 2025-09-18 00:52:55.631326 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.631336 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.631345 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.631355 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.631365 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.631374 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.631383 | orchestrator | 2025-09-18 00:52:55.631393 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-18 00:52:55.631403 | orchestrator | Thursday 18 September 2025 00:44:09 +0000 (0:00:57.648) 0:02:20.425 **** 2025-09-18 00:52:55.631412 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-18 00:52:55.631423 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-18 00:52:55.631433 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-18 00:52:55.631443 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631454 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-18 00:52:55.631463 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-18 00:52:55.631469 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-18 00:52:55.631482 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631488 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-18 00:52:55.631494 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-18 00:52:55.631500 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-18 00:52:55.631506 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631512 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-18 00:52:55.631518 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-18 00:52:55.631525 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-18 00:52:55.631531 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631537 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-18 
00:52:55.631543 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-18 00:52:55.631549 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-18 00:52:55.631555 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631561 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-18 00:52:55.631572 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-18 00:52:55.631579 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-18 00:52:55.631585 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631591 | orchestrator | 2025-09-18 00:52:55.631597 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-18 00:52:55.631603 | orchestrator | Thursday 18 September 2025 00:44:10 +0000 (0:00:00.716) 0:02:21.141 **** 2025-09-18 00:52:55.631609 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631615 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631621 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631627 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631633 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631640 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631646 | orchestrator | 2025-09-18 00:52:55.631652 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-18 00:52:55.631658 | orchestrator | Thursday 18 September 2025 00:44:11 +0000 (0:00:00.797) 0:02:21.939 **** 2025-09-18 00:52:55.631664 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631670 | orchestrator | 2025-09-18 00:52:55.631676 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-18 00:52:55.631682 | orchestrator | Thursday 18 September 2025 00:44:11 +0000 (0:00:00.168) 0:02:22.108 **** 2025-09-18 00:52:55.631688 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631694 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631700 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631707 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631712 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631718 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631725 | orchestrator | 2025-09-18 00:52:55.631731 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-18 00:52:55.631737 | orchestrator | Thursday 18 September 2025 00:44:12 +0000 (0:00:00.655) 0:02:22.763 **** 2025-09-18 00:52:55.631744 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.631750 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631756 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631780 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631787 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631793 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631799 | orchestrator | 2025-09-18 00:52:55.631805 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-18 00:52:55.631815 | orchestrator | Thursday 18 September 2025 00:44:12 +0000 (0:00:00.914) 0:02:23.678 **** 2025-09-18 00:52:55.631821 | orchestrator | skipping: 
[testbed-node-3] 2025-09-18 00:52:55.631827 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.631833 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.631839 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.631845 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.631851 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.631857 | orchestrator | 2025-09-18 00:52:55.631863 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-18 00:52:55.631869 | orchestrator | Thursday 18 September 2025 00:44:13 +0000 (0:00:00.698) 0:02:24.376 **** 2025-09-18 00:52:55.631875 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.631881 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.631887 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.631894 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.631900 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.631906 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.631912 | orchestrator | 2025-09-18 00:52:55.631918 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-18 00:52:55.631924 | orchestrator | Thursday 18 September 2025 00:44:16 +0000 (0:00:02.576) 0:02:26.953 **** 2025-09-18 00:52:55.631930 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.631936 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.631942 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.631948 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.631954 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.631960 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.631966 | orchestrator | 2025-09-18 00:52:55.631973 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-18 00:52:55.631979 | orchestrator | Thursday 18 September 2025 00:44:16 +0000 (0:00:00.543) 0:02:27.496 **** 2025-09-18 00:52:55.631986 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.631993 | orchestrator | 2025-09-18 00:52:55.631999 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-18 00:52:55.632005 | orchestrator | Thursday 18 September 2025 00:44:17 +0000 (0:00:00.916) 0:02:28.412 **** 2025-09-18 00:52:55.632011 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632017 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632023 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632030 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632036 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632042 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632048 | orchestrator | 2025-09-18 00:52:55.632054 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-18 00:52:55.632060 | orchestrator | Thursday 18 September 2025 00:44:18 +0000 (0:00:00.689) 0:02:29.102 **** 2025-09-18 00:52:55.632066 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632072 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632079 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632085 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632091 | orchestrator | 
skipping: [testbed-node-1] 2025-09-18 00:52:55.632097 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632103 | orchestrator | 2025-09-18 00:52:55.632109 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-18 00:52:55.632115 | orchestrator | Thursday 18 September 2025 00:44:18 +0000 (0:00:00.520) 0:02:29.623 **** 2025-09-18 00:52:55.632121 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632127 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632133 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632139 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632149 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632158 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632165 | orchestrator | 2025-09-18 00:52:55.632171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-18 00:52:55.632177 | orchestrator | Thursday 18 September 2025 00:44:19 +0000 (0:00:00.479) 0:02:30.102 **** 2025-09-18 00:52:55.632183 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632189 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632195 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632201 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632207 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632213 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632219 | orchestrator | 2025-09-18 00:52:55.632225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-18 00:52:55.632232 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.772) 0:02:30.875 **** 2025-09-18 00:52:55.632238 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632244 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632250 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632256 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632262 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632268 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632274 | orchestrator | 2025-09-18 00:52:55.632280 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-18 00:52:55.632286 | orchestrator | Thursday 18 September 2025 00:44:20 +0000 (0:00:00.657) 0:02:31.532 **** 2025-09-18 00:52:55.632292 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632313 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632320 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632326 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632335 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632341 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632347 | orchestrator | 2025-09-18 00:52:55.632353 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-18 00:52:55.632359 | orchestrator | Thursday 18 September 2025 00:44:21 +0000 (0:00:00.749) 0:02:32.281 **** 2025-09-18 00:52:55.632366 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632372 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632377 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632384 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632390 | 
orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632396 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632402 | orchestrator | 2025-09-18 00:52:55.632408 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-18 00:52:55.632414 | orchestrator | Thursday 18 September 2025 00:44:22 +0000 (0:00:00.609) 0:02:32.890 **** 2025-09-18 00:52:55.632420 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.632426 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.632432 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.632438 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.632444 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.632450 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.632457 | orchestrator | 2025-09-18 00:52:55.632463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-18 00:52:55.632469 | orchestrator | Thursday 18 September 2025 00:44:22 +0000 (0:00:00.741) 0:02:33.632 **** 2025-09-18 00:52:55.632475 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.632481 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.632487 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.632493 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.632500 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.632506 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.632512 | orchestrator | 2025-09-18 00:52:55.632518 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-18 00:52:55.632530 | orchestrator | Thursday 18 September 2025 00:44:24 +0000 (0:00:01.331) 0:02:34.964 **** 2025-09-18 00:52:55.632537 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.632543 | orchestrator | 2025-09-18 00:52:55.632549 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-18 00:52:55.632555 | orchestrator | Thursday 18 September 2025 00:44:25 +0000 (0:00:01.149) 0:02:36.113 **** 2025-09-18 00:52:55.632561 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-18 00:52:55.632568 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-18 00:52:55.632574 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-18 00:52:55.632580 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-18 00:52:55.632586 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-18 00:52:55.632592 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-18 00:52:55.632598 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-18 00:52:55.632604 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-18 00:52:55.632610 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-18 00:52:55.632617 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-18 00:52:55.632623 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-18 00:52:55.632629 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-18 00:52:55.632635 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-18 00:52:55.632641 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-18 00:52:55.632647 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-18 00:52:55.632653 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-18 00:52:55.632659 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-18 00:52:55.632665 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-18 00:52:55.632672 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-18 00:52:55.632678 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-18 00:52:55.632688 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-18 00:52:55.632694 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-18 00:52:55.632700 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-18 00:52:55.632706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-18 00:52:55.632712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-18 00:52:55.632718 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-18 00:52:55.632724 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-18 00:52:55.632730 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-18 00:52:55.632736 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-18 00:52:55.632742 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-18 00:52:55.632748 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-18 00:52:55.632754 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-18 00:52:55.632761 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-18 00:52:55.632767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-18 00:52:55.632773 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-18 00:52:55.632779 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-18 00:52:55.632785 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-18 00:52:55.632794 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-18 00:52:55.632806 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-18 00:52:55.632812 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-18 00:52:55.632819 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 00:52:55.632825 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-18 00:52:55.632831 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-18 00:52:55.632837 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-18 00:52:55.632843 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-18 00:52:55.632849 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-18 00:52:55.632855 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 00:52:55.632862 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 00:52:55.632868 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-18 00:52:55.632874 
| orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 00:52:55.632880 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-18 00:52:55.632886 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 00:52:55.632892 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 00:52:55.632898 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 00:52:55.632904 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-18 00:52:55.632910 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 00:52:55.632916 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 00:52:55.632922 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 00:52:55.632928 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 00:52:55.632934 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 00:52:55.632940 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 00:52:55.632946 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 00:52:55.632952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 00:52:55.632958 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 00:52:55.632965 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 00:52:55.632971 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 00:52:55.632977 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 00:52:55.632983 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 00:52:55.632989 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 00:52:55.632995 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 00:52:55.633001 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 00:52:55.633007 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 00:52:55.633013 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 00:52:55.633019 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-18 00:52:55.633026 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 00:52:55.633032 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 00:52:55.633038 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 00:52:55.633044 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 00:52:55.633060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 00:52:55.633067 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-18 00:52:55.633073 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-18 00:52:55.633079 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 00:52:55.633085 | orchestrator | 
changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-18 00:52:55.633092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 00:52:55.633098 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 00:52:55.633104 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-18 00:52:55.633110 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-18 00:52:55.633116 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 00:52:55.633122 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 00:52:55.633128 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-18 00:52:55.633134 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 00:52:55.633141 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-18 00:52:55.633147 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-18 00:52:55.633153 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-18 00:52:55.633162 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-18 00:52:55.633168 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-18 00:52:55.633174 | orchestrator | 2025-09-18 00:52:55.633181 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-18 00:52:55.633187 | orchestrator | Thursday 18 September 2025 00:44:32 +0000 (0:00:06.969) 0:02:43.083 **** 2025-09-18 00:52:55.633193 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633199 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633205 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633211 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.633217 | orchestrator | 2025-09-18 00:52:55.633223 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-18 00:52:55.633230 | orchestrator | Thursday 18 September 2025 00:44:33 +0000 (0:00:01.059) 0:02:44.142 **** 2025-09-18 00:52:55.633236 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633243 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633249 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633255 | orchestrator | 2025-09-18 00:52:55.633261 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-18 00:52:55.633267 | orchestrator | Thursday 18 September 2025 00:44:34 +0000 (0:00:00.647) 0:02:44.790 **** 2025-09-18 00:52:55.633273 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633279 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633286 | orchestrator | changed: [testbed-node-5] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633292 | orchestrator | 2025-09-18 00:52:55.633331 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-18 00:52:55.633338 | orchestrator | Thursday 18 September 2025 00:44:35 +0000 (0:00:01.292) 0:02:46.083 **** 2025-09-18 00:52:55.633348 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.633355 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.633361 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.633367 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633373 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633379 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633385 | orchestrator | 2025-09-18 00:52:55.633391 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-18 00:52:55.633398 | orchestrator | Thursday 18 September 2025 00:44:36 +0000 (0:00:00.832) 0:02:46.915 **** 2025-09-18 00:52:55.633404 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.633410 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.633416 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.633422 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633428 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633434 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633439 | orchestrator | 2025-09-18 00:52:55.633445 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-18 00:52:55.633450 | orchestrator | Thursday 18 September 2025 00:44:36 +0000 (0:00:00.778) 0:02:47.694 **** 2025-09-18 00:52:55.633455 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633461 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633466 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633472 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633477 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633482 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633488 | orchestrator | 2025-09-18 00:52:55.633493 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-18 00:52:55.633498 | orchestrator | Thursday 18 September 2025 00:44:37 +0000 (0:00:00.549) 0:02:48.243 **** 2025-09-18 00:52:55.633507 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633513 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633518 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633524 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633529 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633534 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633540 | orchestrator | 2025-09-18 00:52:55.633545 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-18 00:52:55.633550 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.714) 0:02:48.958 **** 2025-09-18 00:52:55.633556 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633561 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633566 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633572 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633577 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 00:52:55.633582 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633588 | orchestrator | 2025-09-18 00:52:55.633593 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-18 00:52:55.633598 | orchestrator | Thursday 18 September 2025 00:44:38 +0000 (0:00:00.633) 0:02:49.592 **** 2025-09-18 00:52:55.633604 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633609 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633615 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633620 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633625 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633631 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633636 | orchestrator | 2025-09-18 00:52:55.633641 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-18 00:52:55.633649 | orchestrator | Thursday 18 September 2025 00:44:39 +0000 (0:00:00.699) 0:02:50.291 **** 2025-09-18 00:52:55.633655 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633660 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633669 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633674 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633680 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633685 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633690 | orchestrator | 2025-09-18 00:52:55.633696 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-18 00:52:55.633701 | orchestrator | Thursday 18 September 2025 00:44:40 +0000 (0:00:01.203) 0:02:51.495 **** 2025-09-18 00:52:55.633707 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633712 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633717 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633723 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633728 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633733 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633739 | orchestrator | 2025-09-18 00:52:55.633744 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-18 00:52:55.633750 | orchestrator | Thursday 18 September 2025 00:44:41 +0000 (0:00:00.663) 0:02:52.158 **** 2025-09-18 00:52:55.633755 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633760 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633766 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633771 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.633776 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.633782 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.633787 | orchestrator | 2025-09-18 00:52:55.633792 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-18 00:52:55.633798 | orchestrator | Thursday 18 September 2025 00:44:45 +0000 (0:00:03.898) 0:02:56.057 **** 2025-09-18 00:52:55.633803 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.633809 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.633814 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.633819 | orchestrator | skipping: [testbed-node-0] 
2025-09-18 00:52:55.633825 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633830 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633835 | orchestrator | 2025-09-18 00:52:55.633841 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-18 00:52:55.633846 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.753) 0:02:56.811 **** 2025-09-18 00:52:55.633852 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.633857 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.633862 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633868 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.633873 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633878 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633884 | orchestrator | 2025-09-18 00:52:55.633889 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-18 00:52:55.633894 | orchestrator | Thursday 18 September 2025 00:44:46 +0000 (0:00:00.821) 0:02:57.632 **** 2025-09-18 00:52:55.633900 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.633905 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.633910 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.633916 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633921 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633926 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633932 | orchestrator | 2025-09-18 00:52:55.633937 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-18 00:52:55.633942 | orchestrator | Thursday 18 September 2025 00:44:47 +0000 (0:00:00.682) 0:02:58.315 **** 2025-09-18 00:52:55.633948 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633953 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633963 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.633968 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.633974 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.633979 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.633985 | orchestrator | 2025-09-18 00:52:55.633993 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-18 00:52:55.633998 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:00.784) 0:02:59.099 **** 2025-09-18 00:52:55.634005 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-18 00:52:55.634012 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-18 
00:52:55.634040 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-18 00:52:55.634049 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-18 00:52:55.634055 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634060 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-18 00:52:55.634066 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-18 00:52:55.634072 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634077 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634083 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634088 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634094 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634099 | orchestrator | 2025-09-18 00:52:55.634105 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-18 00:52:55.634110 | orchestrator | Thursday 18 September 2025 00:44:48 +0000 (0:00:00.569) 0:02:59.669 **** 2025-09-18 00:52:55.634116 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634121 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634127 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634132 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634137 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634143 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634148 | orchestrator | 2025-09-18 00:52:55.634154 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-18 00:52:55.634159 | orchestrator | Thursday 18 September 2025 00:44:49 +0000 (0:00:00.868) 0:03:00.537 **** 2025-09-18 00:52:55.634164 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634174 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634180 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634185 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634190 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634196 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634201 | orchestrator | 2025-09-18 00:52:55.634206 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-18 00:52:55.634212 | orchestrator | Thursday 18 September 2025 00:44:50 +0000 
(0:00:00.768) 0:03:01.306 **** 2025-09-18 00:52:55.634217 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634223 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634228 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634233 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634238 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634244 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634249 | orchestrator | 2025-09-18 00:52:55.634255 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-18 00:52:55.634260 | orchestrator | Thursday 18 September 2025 00:44:51 +0000 (0:00:00.794) 0:03:02.100 **** 2025-09-18 00:52:55.634265 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634271 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634276 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634281 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634287 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634292 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634311 | orchestrator | 2025-09-18 00:52:55.634317 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-18 00:52:55.634322 | orchestrator | Thursday 18 September 2025 00:44:52 +0000 (0:00:00.775) 0:03:02.876 **** 2025-09-18 00:52:55.634328 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634343 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634349 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634354 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634360 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634365 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634371 | orchestrator | 2025-09-18 00:52:55.634376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-18 00:52:55.634381 | orchestrator | Thursday 18 September 2025 00:44:52 +0000 (0:00:00.751) 0:03:03.627 **** 2025-09-18 00:52:55.634387 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.634392 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.634398 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634403 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.634408 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634414 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634419 | orchestrator | 2025-09-18 00:52:55.634424 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-18 00:52:55.634430 | orchestrator | Thursday 18 September 2025 00:44:53 +0000 (0:00:00.578) 0:03:04.206 **** 2025-09-18 00:52:55.634435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.634440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.634446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.634451 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634456 | orchestrator | 2025-09-18 00:52:55.634462 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-18 00:52:55.634467 | orchestrator | Thursday 18 September 2025 00:44:54 +0000 (0:00:00.525) 0:03:04.732 **** 2025-09-18 00:52:55.634475 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.634481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.634486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.634496 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634501 | orchestrator | 2025-09-18 00:52:55.634507 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-18 00:52:55.634512 | orchestrator | Thursday 18 September 2025 00:44:54 +0000 (0:00:00.507) 0:03:05.239 **** 2025-09-18 00:52:55.634517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.634523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.634528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.634533 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634539 | orchestrator | 2025-09-18 00:52:55.634544 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-18 00:52:55.634550 | orchestrator | Thursday 18 September 2025 00:44:55 +0000 (0:00:00.737) 0:03:05.977 **** 2025-09-18 00:52:55.634555 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.634560 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.634566 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634571 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634576 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.634582 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634587 | orchestrator | 2025-09-18 00:52:55.634592 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-18 00:52:55.634598 | orchestrator | Thursday 18 September 2025 00:44:56 +0000 (0:00:00.944) 0:03:06.922 **** 2025-09-18 00:52:55.634603 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-18 00:52:55.634608 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-18 00:52:55.634614 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-18 00:52:55.634619 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634624 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-18 00:52:55.634630 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634635 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-18 00:52:55.634640 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-18 00:52:55.634646 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634651 | orchestrator | 2025-09-18 00:52:55.634656 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-18 00:52:55.634662 | orchestrator | Thursday 18 September 2025 00:44:58 +0000 (0:00:02.480) 0:03:09.403 **** 2025-09-18 00:52:55.634667 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.634673 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.634678 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.634683 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.634689 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.634694 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.634699 | orchestrator | 2025-09-18 00:52:55.634705 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-18 00:52:55.634710 | 
orchestrator | Thursday 18 September 2025 00:45:01 +0000 (0:00:03.287) 0:03:12.690 **** 2025-09-18 00:52:55.634715 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.634721 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.634726 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.634731 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.634737 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.634742 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.634747 | orchestrator | 2025-09-18 00:52:55.634753 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-18 00:52:55.634758 | orchestrator | Thursday 18 September 2025 00:45:04 +0000 (0:00:02.830) 0:03:15.520 **** 2025-09-18 00:52:55.634763 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.634769 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.634774 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.634779 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.634788 | orchestrator | 2025-09-18 00:52:55.634794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-18 00:52:55.634799 | orchestrator | Thursday 18 September 2025 00:45:05 +0000 (0:00:00.876) 0:03:16.396 **** 2025-09-18 00:52:55.634805 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.634810 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.634815 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.634821 | orchestrator | 2025-09-18 00:52:55.634829 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-18 00:52:55.634835 | orchestrator | Thursday 18 September 2025 00:45:06 +0000 (0:00:00.386) 0:03:16.782 **** 2025-09-18 00:52:55.634840 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.634846 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.634851 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.634857 | orchestrator | 2025-09-18 00:52:55.634862 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-18 00:52:55.634867 | orchestrator | Thursday 18 September 2025 00:45:07 +0000 (0:00:01.642) 0:03:18.425 **** 2025-09-18 00:52:55.634873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 00:52:55.634878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 00:52:55.634884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 00:52:55.634889 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634894 | orchestrator | 2025-09-18 00:52:55.634900 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-18 00:52:55.634905 | orchestrator | Thursday 18 September 2025 00:45:08 +0000 (0:00:00.663) 0:03:19.089 **** 2025-09-18 00:52:55.634910 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.634916 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.634921 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.634927 | orchestrator | 2025-09-18 00:52:55.634932 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-18 00:52:55.634937 | orchestrator | Thursday 18 September 2025 00:45:08 +0000 (0:00:00.298) 0:03:19.387 **** 
2025-09-18 00:52:55.634943 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.634951 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.634956 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.634961 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.634967 | orchestrator | 2025-09-18 00:52:55.634972 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-18 00:52:55.634978 | orchestrator | Thursday 18 September 2025 00:45:09 +0000 (0:00:01.299) 0:03:20.686 **** 2025-09-18 00:52:55.634983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.634989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.634994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.634999 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635005 | orchestrator | 2025-09-18 00:52:55.635010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-18 00:52:55.635015 | orchestrator | Thursday 18 September 2025 00:45:10 +0000 (0:00:00.292) 0:03:20.978 **** 2025-09-18 00:52:55.635021 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635026 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.635032 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.635037 | orchestrator | 2025-09-18 00:52:55.635042 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-18 00:52:55.635048 | orchestrator | Thursday 18 September 2025 00:45:10 +0000 (0:00:00.481) 0:03:21.460 **** 2025-09-18 00:52:55.635053 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635059 | orchestrator | 2025-09-18 00:52:55.635064 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-18 00:52:55.635069 | orchestrator | Thursday 18 September 2025 00:45:10 +0000 (0:00:00.172) 0:03:21.633 **** 2025-09-18 00:52:55.635079 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635084 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.635090 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.635095 | orchestrator | 2025-09-18 00:52:55.635100 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-18 00:52:55.635106 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:00.317) 0:03:21.950 **** 2025-09-18 00:52:55.635111 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635116 | orchestrator | 2025-09-18 00:52:55.635122 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-18 00:52:55.635127 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:00.176) 0:03:22.126 **** 2025-09-18 00:52:55.635133 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635138 | orchestrator | 2025-09-18 00:52:55.635144 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-18 00:52:55.635149 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:00.166) 0:03:22.293 **** 2025-09-18 00:52:55.635154 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635160 | orchestrator | 2025-09-18 00:52:55.635165 | orchestrator | RUNNING HANDLER 
[ceph-handler : Disable balancer] ****************************** 2025-09-18 00:52:55.635171 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:00.092) 0:03:22.386 **** 2025-09-18 00:52:55.635176 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635181 | orchestrator | 2025-09-18 00:52:55.635187 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-18 00:52:55.635192 | orchestrator | Thursday 18 September 2025 00:45:11 +0000 (0:00:00.169) 0:03:22.555 **** 2025-09-18 00:52:55.635197 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635203 | orchestrator | 2025-09-18 00:52:55.635208 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-18 00:52:55.635213 | orchestrator | Thursday 18 September 2025 00:45:12 +0000 (0:00:00.183) 0:03:22.739 **** 2025-09-18 00:52:55.635219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.635224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.635230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.635235 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635240 | orchestrator | 2025-09-18 00:52:55.635246 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-18 00:52:55.635251 | orchestrator | Thursday 18 September 2025 00:45:12 +0000 (0:00:00.504) 0:03:23.243 **** 2025-09-18 00:52:55.635256 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635265 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.635270 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.635276 | orchestrator | 2025-09-18 00:52:55.635281 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-18 00:52:55.635287 | orchestrator | Thursday 18 September 2025 00:45:13 +0000 (0:00:00.616) 0:03:23.860 **** 2025-09-18 00:52:55.635292 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635308 | orchestrator | 2025-09-18 00:52:55.635313 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-18 00:52:55.635319 | orchestrator | Thursday 18 September 2025 00:45:13 +0000 (0:00:00.176) 0:03:24.036 **** 2025-09-18 00:52:55.635324 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635330 | orchestrator | 2025-09-18 00:52:55.635335 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-18 00:52:55.635341 | orchestrator | Thursday 18 September 2025 00:45:13 +0000 (0:00:00.173) 0:03:24.210 **** 2025-09-18 00:52:55.635346 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.635351 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.635357 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.635362 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.635371 | orchestrator | 2025-09-18 00:52:55.635377 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-18 00:52:55.635382 | orchestrator | Thursday 18 September 2025 00:45:14 +0000 (0:00:01.187) 0:03:25.398 **** 2025-09-18 00:52:55.635388 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.635393 | orchestrator | ok: [testbed-node-4] 2025-09-18 
00:52:55.635398 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.635404 | orchestrator | 2025-09-18 00:52:55.635414 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-18 00:52:55.635419 | orchestrator | Thursday 18 September 2025 00:45:15 +0000 (0:00:00.606) 0:03:26.004 **** 2025-09-18 00:52:55.635425 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.635430 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.635436 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.635441 | orchestrator | 2025-09-18 00:52:55.635446 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-18 00:52:55.635452 | orchestrator | Thursday 18 September 2025 00:45:16 +0000 (0:00:01.395) 0:03:27.399 **** 2025-09-18 00:52:55.635457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.635463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.635468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.635473 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635479 | orchestrator | 2025-09-18 00:52:55.635484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-18 00:52:55.635489 | orchestrator | Thursday 18 September 2025 00:45:17 +0000 (0:00:00.500) 0:03:27.900 **** 2025-09-18 00:52:55.635495 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.635500 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.635505 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.635511 | orchestrator | 2025-09-18 00:52:55.635516 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-18 00:52:55.635522 | orchestrator | Thursday 18 September 2025 00:45:17 +0000 (0:00:00.349) 0:03:28.250 **** 2025-09-18 00:52:55.635527 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.635532 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.635538 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.635543 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.635549 | orchestrator | 2025-09-18 00:52:55.635554 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-18 00:52:55.635559 | orchestrator | Thursday 18 September 2025 00:45:18 +0000 (0:00:00.990) 0:03:29.241 **** 2025-09-18 00:52:55.635565 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.635570 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.635576 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.635581 | orchestrator | 2025-09-18 00:52:55.635586 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-18 00:52:55.635592 | orchestrator | Thursday 18 September 2025 00:45:18 +0000 (0:00:00.252) 0:03:29.493 **** 2025-09-18 00:52:55.635597 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.635603 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.635608 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.635613 | orchestrator | 2025-09-18 00:52:55.635619 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-18 00:52:55.635624 | orchestrator | Thursday 18 September 2025 
00:45:20 +0000 (0:00:01.228) 0:03:30.722 **** 2025-09-18 00:52:55.635630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.635635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.635640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.635646 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635651 | orchestrator | 2025-09-18 00:52:55.635656 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-18 00:52:55.635665 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.590) 0:03:31.312 **** 2025-09-18 00:52:55.635671 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.635676 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.635681 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.635687 | orchestrator | 2025-09-18 00:52:55.635692 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-18 00:52:55.635698 | orchestrator | Thursday 18 September 2025 00:45:20 +0000 (0:00:00.268) 0:03:31.581 **** 2025-09-18 00:52:55.635703 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635708 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.635714 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.635719 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.635724 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.635730 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.635735 | orchestrator | 2025-09-18 00:52:55.635741 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-18 00:52:55.635749 | orchestrator | Thursday 18 September 2025 00:45:21 +0000 (0:00:00.481) 0:03:32.062 **** 2025-09-18 00:52:55.635755 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.635760 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.635766 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.635771 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.635777 | orchestrator | 2025-09-18 00:52:55.635782 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-18 00:52:55.635787 | orchestrator | Thursday 18 September 2025 00:45:22 +0000 (0:00:00.858) 0:03:32.920 **** 2025-09-18 00:52:55.635793 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.635798 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.635804 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.635809 | orchestrator | 2025-09-18 00:52:55.635814 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-18 00:52:55.635820 | orchestrator | Thursday 18 September 2025 00:45:22 +0000 (0:00:00.276) 0:03:33.196 **** 2025-09-18 00:52:55.635825 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.635831 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.635836 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.635841 | orchestrator | 2025-09-18 00:52:55.635847 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-18 00:52:55.635852 | orchestrator | Thursday 18 September 2025 00:45:23 +0000 (0:00:01.301) 0:03:34.498 **** 2025-09-18 00:52:55.635857 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 00:52:55.635865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 00:52:55.635871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 00:52:55.635876 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.635881 | orchestrator | 2025-09-18 00:52:55.635887 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-18 00:52:55.635892 | orchestrator | Thursday 18 September 2025 00:45:24 +0000 (0:00:00.574) 0:03:35.072 **** 2025-09-18 00:52:55.635898 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.635903 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.635908 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.635914 | orchestrator | 2025-09-18 00:52:55.635919 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-18 00:52:55.635925 | orchestrator | 2025-09-18 00:52:55.635930 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.635935 | orchestrator | Thursday 18 September 2025 00:45:24 +0000 (0:00:00.497) 0:03:35.570 **** 2025-09-18 00:52:55.635941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.635946 | orchestrator | 2025-09-18 00:52:55.635955 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.635960 | orchestrator | Thursday 18 September 2025 00:45:25 +0000 (0:00:00.581) 0:03:36.152 **** 2025-09-18 00:52:55.635966 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.635971 | orchestrator | 2025-09-18 00:52:55.635977 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.635982 | orchestrator | Thursday 18 September 2025 00:45:25 +0000 (0:00:00.442) 0:03:36.594 **** 2025-09-18 00:52:55.635987 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.635993 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.635998 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636004 | orchestrator | 2025-09-18 00:52:55.636009 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.636014 | orchestrator | Thursday 18 September 2025 00:45:26 +0000 (0:00:00.739) 0:03:37.334 **** 2025-09-18 00:52:55.636020 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636025 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636030 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636036 | orchestrator | 2025-09-18 00:52:55.636041 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.636047 | orchestrator | Thursday 18 September 2025 00:45:27 +0000 (0:00:00.432) 0:03:37.767 **** 2025-09-18 00:52:55.636052 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636057 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636063 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636068 | orchestrator | 2025-09-18 00:52:55.636073 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 
00:52:55.636079 | orchestrator | Thursday 18 September 2025 00:45:27 +0000 (0:00:00.249) 0:03:38.016 **** 2025-09-18 00:52:55.636084 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636090 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636095 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636100 | orchestrator | 2025-09-18 00:52:55.636105 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.636111 | orchestrator | Thursday 18 September 2025 00:45:27 +0000 (0:00:00.261) 0:03:38.277 **** 2025-09-18 00:52:55.636116 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636122 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636127 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636132 | orchestrator | 2025-09-18 00:52:55.636138 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.636143 | orchestrator | Thursday 18 September 2025 00:45:28 +0000 (0:00:00.736) 0:03:39.014 **** 2025-09-18 00:52:55.636148 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636154 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636159 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636164 | orchestrator | 2025-09-18 00:52:55.636170 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.636175 | orchestrator | Thursday 18 September 2025 00:45:28 +0000 (0:00:00.262) 0:03:39.276 **** 2025-09-18 00:52:55.636181 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636186 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636192 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636197 | orchestrator | 2025-09-18 00:52:55.636205 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.636211 | orchestrator | Thursday 18 September 2025 00:45:29 +0000 (0:00:00.442) 0:03:39.719 **** 2025-09-18 00:52:55.636216 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636221 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636227 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636232 | orchestrator | 2025-09-18 00:52:55.636238 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.636243 | orchestrator | Thursday 18 September 2025 00:45:29 +0000 (0:00:00.716) 0:03:40.436 **** 2025-09-18 00:52:55.636252 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636258 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636263 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636268 | orchestrator | 2025-09-18 00:52:55.636274 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 00:52:55.636279 | orchestrator | Thursday 18 September 2025 00:45:30 +0000 (0:00:00.692) 0:03:41.129 **** 2025-09-18 00:52:55.636285 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636290 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636306 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636311 | orchestrator | 2025-09-18 00:52:55.636317 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.636322 | orchestrator | Thursday 18 September 2025 00:45:30 +0000 (0:00:00.262) 0:03:41.391 **** 
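The container checks above only record whether each Ceph daemon container exists on the node; their results feed the handler_*_status facts set in the following tasks, and daemons not expected on a node are simply skipped (which is why the osd/mds/rgw checks show "skipping" on the control nodes here). As an illustration only, assuming podman as the container runtime and the ceph-mon-<hostname> container naming used in containerized ceph-ansible deployments, an equivalent probe from a shell would look like:

  #!/usr/bin/env bash
  # Illustrative probe: is a mon container running on this host?
  # The ceph-handler role does the equivalent check from Ansible and
  # stores the outcome as a fact instead of printing it.
  if [ -n "$(podman ps -q --filter "name=ceph-mon-$(hostname -s)")" ]; then
      echo "mon container running"
  else
      echo "mon container not found"
  fi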
2025-09-18 00:52:55.636328 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636333 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636338 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636344 | orchestrator | 2025-09-18 00:52:55.636349 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 00:52:55.636357 | orchestrator | Thursday 18 September 2025 00:45:31 +0000 (0:00:00.466) 0:03:41.857 **** 2025-09-18 00:52:55.636362 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636368 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636373 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636378 | orchestrator | 2025-09-18 00:52:55.636384 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 00:52:55.636389 | orchestrator | Thursday 18 September 2025 00:45:31 +0000 (0:00:00.272) 0:03:42.130 **** 2025-09-18 00:52:55.636395 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636400 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636405 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636411 | orchestrator | 2025-09-18 00:52:55.636416 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.636422 | orchestrator | Thursday 18 September 2025 00:45:31 +0000 (0:00:00.272) 0:03:42.402 **** 2025-09-18 00:52:55.636427 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636432 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636437 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636443 | orchestrator | 2025-09-18 00:52:55.636448 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.636454 | orchestrator | Thursday 18 September 2025 00:45:31 +0000 (0:00:00.265) 0:03:42.668 **** 2025-09-18 00:52:55.636459 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636464 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636470 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636475 | orchestrator | 2025-09-18 00:52:55.636481 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.636486 | orchestrator | Thursday 18 September 2025 00:45:32 +0000 (0:00:00.486) 0:03:43.154 **** 2025-09-18 00:52:55.636491 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636497 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.636502 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.636507 | orchestrator | 2025-09-18 00:52:55.636513 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 00:52:55.636518 | orchestrator | Thursday 18 September 2025 00:45:32 +0000 (0:00:00.297) 0:03:43.452 **** 2025-09-18 00:52:55.636523 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636529 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636534 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636540 | orchestrator | 2025-09-18 00:52:55.636545 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.636551 | orchestrator | Thursday 18 September 2025 00:45:33 +0000 (0:00:00.378) 0:03:43.831 **** 2025-09-18 00:52:55.636560 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636565 | 
orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636571 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636576 | orchestrator | 2025-09-18 00:52:55.636581 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.636587 | orchestrator | Thursday 18 September 2025 00:45:33 +0000 (0:00:00.325) 0:03:44.156 **** 2025-09-18 00:52:55.636592 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636597 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636603 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636608 | orchestrator | 2025-09-18 00:52:55.636613 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-18 00:52:55.636619 | orchestrator | Thursday 18 September 2025 00:45:34 +0000 (0:00:00.645) 0:03:44.801 **** 2025-09-18 00:52:55.636624 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636630 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636635 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636640 | orchestrator | 2025-09-18 00:52:55.636646 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-18 00:52:55.636651 | orchestrator | Thursday 18 September 2025 00:45:34 +0000 (0:00:00.305) 0:03:45.107 **** 2025-09-18 00:52:55.636656 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.636662 | orchestrator | 2025-09-18 00:52:55.636667 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-18 00:52:55.636673 | orchestrator | Thursday 18 September 2025 00:45:34 +0000 (0:00:00.525) 0:03:45.632 **** 2025-09-18 00:52:55.636678 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.636683 | orchestrator | 2025-09-18 00:52:55.636689 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-18 00:52:55.636697 | orchestrator | Thursday 18 September 2025 00:45:35 +0000 (0:00:00.282) 0:03:45.915 **** 2025-09-18 00:52:55.636703 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-18 00:52:55.636708 | orchestrator | 2025-09-18 00:52:55.636714 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-18 00:52:55.636719 | orchestrator | Thursday 18 September 2025 00:45:36 +0000 (0:00:00.964) 0:03:46.880 **** 2025-09-18 00:52:55.636725 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636730 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636736 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636741 | orchestrator | 2025-09-18 00:52:55.636746 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-18 00:52:55.636752 | orchestrator | Thursday 18 September 2025 00:45:36 +0000 (0:00:00.442) 0:03:47.322 **** 2025-09-18 00:52:55.636757 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636763 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636768 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636773 | orchestrator | 2025-09-18 00:52:55.636779 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-18 00:52:55.636784 | orchestrator | Thursday 18 September 2025 00:45:36 +0000 (0:00:00.314) 0:03:47.636 **** 2025-09-18 00:52:55.636790 | orchestrator | changed: [testbed-node-0] 
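The keyring tasks in this play generate the monitor bootstrap key once, distribute it to the mons, and then merge the admin keyring into it. For orientation only, the same result can be produced by hand with ceph-authtool (the paths and key handling below are assumptions; ceph-ansible manages its own):

  #!/usr/bin/env bash
  # Sketch of the monitor/admin bootstrap keyrings built manually.
  set -euo pipefail
  # Monitor keyring with a freshly generated mon. key.
  ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
      --gen-key -n mon. --cap mon 'allow *'
  # Admin keyring, then merged into the mon keyring so the first mon
  # can authenticate client.admin.
  ceph-authtool --create-keyring /tmp/ceph.client.admin.keyring \
      --gen-key -n client.admin \
      --cap mon 'allow *' --cap osd 'allow *' --cap mgr 'allow *' --cap mds 'allow *'
  ceph-authtool /tmp/ceph.mon.keyring --import-keyring /tmp/ceph.client.admin.keyring

The subsequent "Generate initial monmap" and "Ceph monitor mkfs with keyring" tasks then build on these keyrings with monmaptool and ceph-mon --mkfs before the systemd units are generated and started.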
2025-09-18 00:52:55.636795 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.636800 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.636806 | orchestrator | 2025-09-18 00:52:55.636811 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-18 00:52:55.636817 | orchestrator | Thursday 18 September 2025 00:45:38 +0000 (0:00:01.251) 0:03:48.888 **** 2025-09-18 00:52:55.636822 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.636827 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.636833 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.636838 | orchestrator | 2025-09-18 00:52:55.636846 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-18 00:52:55.636851 | orchestrator | Thursday 18 September 2025 00:45:39 +0000 (0:00:01.151) 0:03:50.039 **** 2025-09-18 00:52:55.636860 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.636866 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.636871 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.636876 | orchestrator | 2025-09-18 00:52:55.636882 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-18 00:52:55.636887 | orchestrator | Thursday 18 September 2025 00:45:40 +0000 (0:00:00.744) 0:03:50.784 **** 2025-09-18 00:52:55.636892 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636898 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.636903 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.636908 | orchestrator | 2025-09-18 00:52:55.636914 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-18 00:52:55.636919 | orchestrator | Thursday 18 September 2025 00:45:40 +0000 (0:00:00.678) 0:03:51.463 **** 2025-09-18 00:52:55.636925 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.636930 | orchestrator | 2025-09-18 00:52:55.636936 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-18 00:52:55.636941 | orchestrator | Thursday 18 September 2025 00:45:42 +0000 (0:00:01.258) 0:03:52.722 **** 2025-09-18 00:52:55.636946 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.636952 | orchestrator | 2025-09-18 00:52:55.636957 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-18 00:52:55.636962 | orchestrator | Thursday 18 September 2025 00:45:42 +0000 (0:00:00.726) 0:03:53.449 **** 2025-09-18 00:52:55.636968 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 00:52:55.636973 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.636979 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.636984 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:52:55.636989 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-18 00:52:55.636995 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:52:55.637000 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:52:55.637005 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-18 00:52:55.637011 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 
00:52:55.637016 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-18 00:52:55.637022 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-18 00:52:55.637027 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-18 00:52:55.637032 | orchestrator | 2025-09-18 00:52:55.637038 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-18 00:52:55.637043 | orchestrator | Thursday 18 September 2025 00:45:46 +0000 (0:00:03.701) 0:03:57.150 **** 2025-09-18 00:52:55.637048 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637054 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637059 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637064 | orchestrator | 2025-09-18 00:52:55.637070 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-18 00:52:55.637075 | orchestrator | Thursday 18 September 2025 00:45:47 +0000 (0:00:01.478) 0:03:58.629 **** 2025-09-18 00:52:55.637081 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637086 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.637091 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.637097 | orchestrator | 2025-09-18 00:52:55.637102 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-18 00:52:55.637108 | orchestrator | Thursday 18 September 2025 00:45:48 +0000 (0:00:00.339) 0:03:58.968 **** 2025-09-18 00:52:55.637113 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637118 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.637124 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.637129 | orchestrator | 2025-09-18 00:52:55.637135 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-18 00:52:55.637143 | orchestrator | Thursday 18 September 2025 00:45:48 +0000 (0:00:00.335) 0:03:59.304 **** 2025-09-18 00:52:55.637149 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637154 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637160 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637165 | orchestrator | 2025-09-18 00:52:55.637173 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-18 00:52:55.637179 | orchestrator | Thursday 18 September 2025 00:45:50 +0000 (0:00:01.714) 0:04:01.018 **** 2025-09-18 00:52:55.637184 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637190 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637195 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637201 | orchestrator | 2025-09-18 00:52:55.637206 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-18 00:52:55.637211 | orchestrator | Thursday 18 September 2025 00:45:51 +0000 (0:00:01.599) 0:04:02.618 **** 2025-09-18 00:52:55.637217 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637222 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637227 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637233 | orchestrator | 2025-09-18 00:52:55.637238 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-18 00:52:55.637244 | orchestrator | Thursday 18 September 2025 00:45:52 +0000 (0:00:00.319) 0:04:02.938 **** 2025-09-18 00:52:55.637249 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.637254 | orchestrator | 2025-09-18 00:52:55.637260 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-18 00:52:55.637265 | orchestrator | Thursday 18 September 2025 00:45:52 +0000 (0:00:00.600) 0:04:03.539 **** 2025-09-18 00:52:55.637271 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637276 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637281 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637287 | orchestrator | 2025-09-18 00:52:55.637320 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-18 00:52:55.637326 | orchestrator | Thursday 18 September 2025 00:45:53 +0000 (0:00:00.618) 0:04:04.157 **** 2025-09-18 00:52:55.637332 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637337 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637343 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637348 | orchestrator | 2025-09-18 00:52:55.637354 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-18 00:52:55.637359 | orchestrator | Thursday 18 September 2025 00:45:53 +0000 (0:00:00.317) 0:04:04.475 **** 2025-09-18 00:52:55.637364 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.637370 | orchestrator | 2025-09-18 00:52:55.637375 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-18 00:52:55.637381 | orchestrator | Thursday 18 September 2025 00:45:54 +0000 (0:00:00.522) 0:04:04.997 **** 2025-09-18 00:52:55.637386 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637392 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637397 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637402 | orchestrator | 2025-09-18 00:52:55.637408 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-18 00:52:55.637413 | orchestrator | Thursday 18 September 2025 00:45:56 +0000 (0:00:02.614) 0:04:07.611 **** 2025-09-18 00:52:55.637419 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637424 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637429 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637435 | orchestrator | 2025-09-18 00:52:55.637440 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-18 00:52:55.637446 | orchestrator | Thursday 18 September 2025 00:45:58 +0000 (0:00:01.240) 0:04:08.852 **** 2025-09-18 00:52:55.637451 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637460 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637465 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637471 | orchestrator | 2025-09-18 00:52:55.637476 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-18 00:52:55.637481 | orchestrator | Thursday 18 September 2025 00:45:59 +0000 (0:00:01.710) 0:04:10.562 **** 2025-09-18 00:52:55.637487 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.637492 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.637498 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.637503 | 
orchestrator | 2025-09-18 00:52:55.637508 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-18 00:52:55.637514 | orchestrator | Thursday 18 September 2025 00:46:01 +0000 (0:00:02.058) 0:04:12.621 **** 2025-09-18 00:52:55.637519 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-18 00:52:55.637525 | orchestrator | 2025-09-18 00:52:55.637530 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-18 00:52:55.637535 | orchestrator | Thursday 18 September 2025 00:46:02 +0000 (0:00:01.007) 0:04:13.629 **** 2025-09-18 00:52:55.637541 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-18 00:52:55.637546 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637552 | orchestrator | 2025-09-18 00:52:55.637557 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-18 00:52:55.637562 | orchestrator | Thursday 18 September 2025 00:46:24 +0000 (0:00:21.988) 0:04:35.618 **** 2025-09-18 00:52:55.637568 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.637573 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637579 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.637584 | orchestrator | 2025-09-18 00:52:55.637589 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-18 00:52:55.637595 | orchestrator | Thursday 18 September 2025 00:46:34 +0000 (0:00:09.604) 0:04:45.222 **** 2025-09-18 00:52:55.637600 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637605 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637611 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637616 | orchestrator | 2025-09-18 00:52:55.637621 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-18 00:52:55.637627 | orchestrator | Thursday 18 September 2025 00:46:34 +0000 (0:00:00.303) 0:04:45.525 **** 2025-09-18 00:52:55.637636 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-18 00:52:55.637643 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-18 00:52:55.637649 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-18 00:52:55.637659 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-18 00:52:55.637668 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-18 00:52:55.637674 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__87935370b02fa393b71b695bca33148387d1f434'}])  2025-09-18 00:52:55.637680 | orchestrator | 2025-09-18 00:52:55.637686 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-18 00:52:55.637691 | orchestrator | Thursday 18 September 2025 00:46:49 +0000 (0:00:14.887) 0:05:00.413 **** 2025-09-18 00:52:55.637697 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637702 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637708 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637713 | orchestrator | 2025-09-18 00:52:55.637718 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-18 00:52:55.637724 | orchestrator | Thursday 18 September 2025 00:46:50 +0000 (0:00:00.416) 0:05:00.829 **** 2025-09-18 00:52:55.637729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-18 00:52:55.637735 | orchestrator | 2025-09-18 00:52:55.637740 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-18 00:52:55.637745 | orchestrator | Thursday 18 September 2025 00:46:50 +0000 (0:00:00.845) 0:05:01.675 **** 2025-09-18 00:52:55.637751 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637756 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.637762 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.637767 | orchestrator | 2025-09-18 00:52:55.637772 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-18 00:52:55.637778 | orchestrator | Thursday 18 September 2025 00:46:51 +0000 (0:00:00.330) 0:05:02.006 **** 2025-09-18 00:52:55.637783 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637788 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637794 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637799 | orchestrator | 2025-09-18 00:52:55.637804 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-18 00:52:55.637810 | orchestrator | Thursday 18 September 2025 00:46:51 +0000 (0:00:00.352) 0:05:02.358 **** 2025-09-18 00:52:55.637815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 00:52:55.637821 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-09-18 00:52:55.637826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 00:52:55.637831 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637837 | orchestrator | 2025-09-18 00:52:55.637842 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-18 00:52:55.637847 | orchestrator | Thursday 18 September 2025 00:46:52 +0000 (0:00:00.664) 0:05:03.022 **** 2025-09-18 00:52:55.637853 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637858 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.637863 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.637868 | orchestrator | 2025-09-18 00:52:55.637875 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-18 00:52:55.637881 | orchestrator | 2025-09-18 00:52:55.637885 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.637890 | orchestrator | Thursday 18 September 2025 00:46:53 +0000 (0:00:00.824) 0:05:03.847 **** 2025-09-18 00:52:55.637898 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.637903 | orchestrator | 2025-09-18 00:52:55.637908 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.637913 | orchestrator | Thursday 18 September 2025 00:46:53 +0000 (0:00:00.551) 0:05:04.399 **** 2025-09-18 00:52:55.637917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.637922 | orchestrator | 2025-09-18 00:52:55.637927 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.637932 | orchestrator | Thursday 18 September 2025 00:46:54 +0000 (0:00:00.544) 0:05:04.943 **** 2025-09-18 00:52:55.637937 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.637941 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.637946 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.637951 | orchestrator | 2025-09-18 00:52:55.637956 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.637961 | orchestrator | Thursday 18 September 2025 00:46:55 +0000 (0:00:01.011) 0:05:05.955 **** 2025-09-18 00:52:55.637966 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.637971 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.637979 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.637984 | orchestrator | 2025-09-18 00:52:55.637989 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.637994 | orchestrator | Thursday 18 September 2025 00:46:55 +0000 (0:00:00.336) 0:05:06.292 **** 2025-09-18 00:52:55.637998 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638003 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638008 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638013 | orchestrator | 2025-09-18 00:52:55.638081 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 00:52:55.638087 | orchestrator | Thursday 18 September 2025 00:46:55 +0000 (0:00:00.361) 0:05:06.653 **** 2025-09-18 00:52:55.638091 | orchestrator | 
skipping: [testbed-node-0] 2025-09-18 00:52:55.638096 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638101 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638106 | orchestrator | 2025-09-18 00:52:55.638111 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.638115 | orchestrator | Thursday 18 September 2025 00:46:56 +0000 (0:00:00.312) 0:05:06.965 **** 2025-09-18 00:52:55.638120 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638125 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638130 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638135 | orchestrator | 2025-09-18 00:52:55.638139 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.638144 | orchestrator | Thursday 18 September 2025 00:46:57 +0000 (0:00:01.054) 0:05:08.020 **** 2025-09-18 00:52:55.638149 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638154 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638159 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638163 | orchestrator | 2025-09-18 00:52:55.638168 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.638173 | orchestrator | Thursday 18 September 2025 00:46:57 +0000 (0:00:00.324) 0:05:08.345 **** 2025-09-18 00:52:55.638178 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638183 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638187 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638192 | orchestrator | 2025-09-18 00:52:55.638197 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.638202 | orchestrator | Thursday 18 September 2025 00:46:57 +0000 (0:00:00.312) 0:05:08.658 **** 2025-09-18 00:52:55.638207 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638211 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638220 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638225 | orchestrator | 2025-09-18 00:52:55.638230 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.638234 | orchestrator | Thursday 18 September 2025 00:46:58 +0000 (0:00:00.772) 0:05:09.430 **** 2025-09-18 00:52:55.638239 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638244 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638249 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638254 | orchestrator | 2025-09-18 00:52:55.638258 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 00:52:55.638263 | orchestrator | Thursday 18 September 2025 00:46:59 +0000 (0:00:01.058) 0:05:10.488 **** 2025-09-18 00:52:55.638268 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638273 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638278 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638282 | orchestrator | 2025-09-18 00:52:55.638287 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.638292 | orchestrator | Thursday 18 September 2025 00:47:00 +0000 (0:00:00.309) 0:05:10.798 **** 2025-09-18 00:52:55.638307 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638312 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638317 | 
orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638322 | orchestrator | 2025-09-18 00:52:55.638326 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 00:52:55.638331 | orchestrator | Thursday 18 September 2025 00:47:00 +0000 (0:00:00.349) 0:05:11.147 **** 2025-09-18 00:52:55.638336 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638341 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638346 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638350 | orchestrator | 2025-09-18 00:52:55.638355 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 00:52:55.638360 | orchestrator | Thursday 18 September 2025 00:47:00 +0000 (0:00:00.330) 0:05:11.478 **** 2025-09-18 00:52:55.638365 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638370 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638392 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638398 | orchestrator | 2025-09-18 00:52:55.638403 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.638408 | orchestrator | Thursday 18 September 2025 00:47:01 +0000 (0:00:00.612) 0:05:12.091 **** 2025-09-18 00:52:55.638413 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638417 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638422 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638427 | orchestrator | 2025-09-18 00:52:55.638431 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.638436 | orchestrator | Thursday 18 September 2025 00:47:01 +0000 (0:00:00.328) 0:05:12.420 **** 2025-09-18 00:52:55.638441 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638446 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638450 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638455 | orchestrator | 2025-09-18 00:52:55.638460 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.638465 | orchestrator | Thursday 18 September 2025 00:47:02 +0000 (0:00:00.329) 0:05:12.750 **** 2025-09-18 00:52:55.638469 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638474 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638479 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638484 | orchestrator | 2025-09-18 00:52:55.638488 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 00:52:55.638493 | orchestrator | Thursday 18 September 2025 00:47:02 +0000 (0:00:00.332) 0:05:13.083 **** 2025-09-18 00:52:55.638498 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638503 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638507 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638516 | orchestrator | 2025-09-18 00:52:55.638523 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.638528 | orchestrator | Thursday 18 September 2025 00:47:02 +0000 (0:00:00.385) 0:05:13.468 **** 2025-09-18 00:52:55.638533 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638538 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638543 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638547 | orchestrator | 2025-09-18 
00:52:55.638552 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.638557 | orchestrator | Thursday 18 September 2025 00:47:03 +0000 (0:00:00.784) 0:05:14.253 **** 2025-09-18 00:52:55.638562 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638566 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638571 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638576 | orchestrator | 2025-09-18 00:52:55.638580 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-18 00:52:55.638585 | orchestrator | Thursday 18 September 2025 00:47:04 +0000 (0:00:00.593) 0:05:14.846 **** 2025-09-18 00:52:55.638590 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 00:52:55.638595 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:52:55.638599 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:52:55.638604 | orchestrator | 2025-09-18 00:52:55.638609 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-18 00:52:55.638614 | orchestrator | Thursday 18 September 2025 00:47:05 +0000 (0:00:00.958) 0:05:15.804 **** 2025-09-18 00:52:55.638618 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.638623 | orchestrator | 2025-09-18 00:52:55.638628 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-18 00:52:55.638633 | orchestrator | Thursday 18 September 2025 00:47:05 +0000 (0:00:00.826) 0:05:16.631 **** 2025-09-18 00:52:55.638637 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.638642 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.638647 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.638651 | orchestrator | 2025-09-18 00:52:55.638656 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-18 00:52:55.638661 | orchestrator | Thursday 18 September 2025 00:47:07 +0000 (0:00:01.138) 0:05:17.769 **** 2025-09-18 00:52:55.638666 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638670 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638675 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638680 | orchestrator | 2025-09-18 00:52:55.638685 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-18 00:52:55.638689 | orchestrator | Thursday 18 September 2025 00:47:07 +0000 (0:00:00.336) 0:05:18.105 **** 2025-09-18 00:52:55.638694 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 00:52:55.638699 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 00:52:55.638704 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 00:52:55.638708 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-18 00:52:55.638713 | orchestrator | 2025-09-18 00:52:55.638718 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-18 00:52:55.638722 | orchestrator | Thursday 18 September 2025 00:47:18 +0000 (0:00:10.645) 0:05:28.751 **** 2025-09-18 00:52:55.638727 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638732 | orchestrator | ok: [testbed-node-1] 2025-09-18 
00:52:55.638737 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638741 | orchestrator | 2025-09-18 00:52:55.638746 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-18 00:52:55.638751 | orchestrator | Thursday 18 September 2025 00:47:18 +0000 (0:00:00.635) 0:05:29.386 **** 2025-09-18 00:52:55.638756 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-18 00:52:55.638764 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-18 00:52:55.638768 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-18 00:52:55.638773 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-18 00:52:55.638778 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.638783 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.638787 | orchestrator | 2025-09-18 00:52:55.638807 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-18 00:52:55.638813 | orchestrator | Thursday 18 September 2025 00:47:20 +0000 (0:00:02.244) 0:05:31.630 **** 2025-09-18 00:52:55.638818 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-18 00:52:55.638823 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-18 00:52:55.638827 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-18 00:52:55.638832 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 00:52:55.638837 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-18 00:52:55.638842 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-18 00:52:55.638846 | orchestrator | 2025-09-18 00:52:55.638851 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-18 00:52:55.638856 | orchestrator | Thursday 18 September 2025 00:47:22 +0000 (0:00:01.373) 0:05:33.004 **** 2025-09-18 00:52:55.638861 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.638865 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.638870 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.638875 | orchestrator | 2025-09-18 00:52:55.638880 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-18 00:52:55.638884 | orchestrator | Thursday 18 September 2025 00:47:22 +0000 (0:00:00.675) 0:05:33.679 **** 2025-09-18 00:52:55.638889 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638894 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638899 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638904 | orchestrator | 2025-09-18 00:52:55.638908 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-18 00:52:55.638913 | orchestrator | Thursday 18 September 2025 00:47:23 +0000 (0:00:00.587) 0:05:34.267 **** 2025-09-18 00:52:55.638921 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638926 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638931 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638936 | orchestrator | 2025-09-18 00:52:55.638940 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-18 00:52:55.638945 | orchestrator | Thursday 18 September 2025 00:47:23 +0000 (0:00:00.363) 0:05:34.631 **** 2025-09-18 00:52:55.638950 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.638955 | orchestrator | 2025-09-18 00:52:55.638959 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-18 00:52:55.638964 | orchestrator | Thursday 18 September 2025 00:47:24 +0000 (0:00:00.601) 0:05:35.232 **** 2025-09-18 00:52:55.638969 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.638974 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.638978 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.638983 | orchestrator | 2025-09-18 00:52:55.638988 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-18 00:52:55.638993 | orchestrator | Thursday 18 September 2025 00:47:24 +0000 (0:00:00.321) 0:05:35.554 **** 2025-09-18 00:52:55.638997 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.639002 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.639007 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.639012 | orchestrator | 2025-09-18 00:52:55.639016 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-18 00:52:55.639021 | orchestrator | Thursday 18 September 2025 00:47:25 +0000 (0:00:00.611) 0:05:36.165 **** 2025-09-18 00:52:55.639030 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.639035 | orchestrator | 2025-09-18 00:52:55.639040 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-18 00:52:55.639044 | orchestrator | Thursday 18 September 2025 00:47:25 +0000 (0:00:00.530) 0:05:36.697 **** 2025-09-18 00:52:55.639049 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.639054 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.639059 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.639064 | orchestrator | 2025-09-18 00:52:55.639068 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-18 00:52:55.639073 | orchestrator | Thursday 18 September 2025 00:47:27 +0000 (0:00:01.200) 0:05:37.897 **** 2025-09-18 00:52:55.639078 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.639083 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.639087 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.639092 | orchestrator | 2025-09-18 00:52:55.639097 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-18 00:52:55.639102 | orchestrator | Thursday 18 September 2025 00:47:28 +0000 (0:00:01.518) 0:05:39.415 **** 2025-09-18 00:52:55.639106 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.639111 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.639116 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.639121 | orchestrator | 2025-09-18 00:52:55.639125 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-18 00:52:55.639130 | orchestrator | Thursday 18 September 2025 00:47:30 +0000 (0:00:01.795) 0:05:41.211 **** 2025-09-18 00:52:55.639135 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.639140 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.639144 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.639149 | orchestrator | 
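The ceph-mgr tasks above template a systemd unit file and a ceph-mgr.target, enable the target, and start the manager daemons. A minimal Ansible sketch of that pattern follows; the template name, unit paths, and instance variable are illustrative assumptions, not the actual ceph-ansible sources.

    # Sketch only -- file names and unit contents are assumptions.
    - name: Generate systemd unit file
      ansible.builtin.template:
        src: ceph-mgr.service.j2                      # assumed template name
        dest: /etc/systemd/system/ceph-mgr@.service
        mode: "0644"

    - name: Generate systemd ceph-mgr target file
      ansible.builtin.copy:
        dest: /etc/systemd/system/ceph-mgr.target
        content: |
          [Unit]
          Description=ceph-mgr target allowing to start/stop all ceph-mgr@ instances at once
          PartOf=ceph.target
          [Install]
          WantedBy=multi-user.target ceph.target
        mode: "0644"

    - name: Enable ceph-mgr.target
      ansible.builtin.systemd:
        name: ceph-mgr.target
        enabled: true
        daemon_reload: true

    - name: Systemd start mgr
      ansible.builtin.systemd:
        name: "ceph-mgr@{{ ansible_facts['hostname'] }}"   # assumed instance name
        state: started
        enabled: true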
2025-09-18 00:52:55.639154 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-18 00:52:55.639159 | orchestrator | Thursday 18 September 2025 00:47:33 +0000 (0:00:03.116) 0:05:44.327 **** 2025-09-18 00:52:55.639163 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.639168 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.639173 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-18 00:52:55.639178 | orchestrator | 2025-09-18 00:52:55.639182 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-18 00:52:55.639187 | orchestrator | Thursday 18 September 2025 00:47:34 +0000 (0:00:00.418) 0:05:44.746 **** 2025-09-18 00:52:55.639192 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-18 00:52:55.639211 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-18 00:52:55.639216 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-18 00:52:55.639221 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-18 00:52:55.639226 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-18 00:52:55.639231 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.639236 | orchestrator | 2025-09-18 00:52:55.639241 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-18 00:52:55.639245 | orchestrator | Thursday 18 September 2025 00:48:05 +0000 (0:00:30.969) 0:06:15.716 **** 2025-09-18 00:52:55.639250 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.639255 | orchestrator | 2025-09-18 00:52:55.639259 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-18 00:52:55.639264 | orchestrator | Thursday 18 September 2025 00:48:06 +0000 (0:00:01.367) 0:06:17.084 **** 2025-09-18 00:52:55.639269 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.639326 | orchestrator | 2025-09-18 00:52:55.639331 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-18 00:52:55.639336 | orchestrator | Thursday 18 September 2025 00:48:06 +0000 (0:00:00.379) 0:06:17.463 **** 2025-09-18 00:52:55.639341 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.639345 | orchestrator | 2025-09-18 00:52:55.639353 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-18 00:52:55.639358 | orchestrator | Thursday 18 September 2025 00:48:06 +0000 (0:00:00.171) 0:06:17.634 **** 2025-09-18 00:52:55.639363 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-18 00:52:55.639368 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-18 00:52:55.639372 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-18 00:52:55.639377 | orchestrator | 2025-09-18 00:52:55.639382 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-18 00:52:55.639387 | 
orchestrator | Thursday 18 September 2025 00:48:13 +0000 (0:00:06.473) 0:06:24.108 **** 2025-09-18 00:52:55.639391 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-18 00:52:55.639396 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-18 00:52:55.639401 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-18 00:52:55.639405 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-18 00:52:55.639410 | orchestrator | 2025-09-18 00:52:55.639415 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-18 00:52:55.639420 | orchestrator | Thursday 18 September 2025 00:48:18 +0000 (0:00:04.716) 0:06:28.825 **** 2025-09-18 00:52:55.639424 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.639429 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.639434 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.639438 | orchestrator | 2025-09-18 00:52:55.639443 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-18 00:52:55.639448 | orchestrator | Thursday 18 September 2025 00:48:19 +0000 (0:00:01.007) 0:06:29.833 **** 2025-09-18 00:52:55.639453 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.639457 | orchestrator | 2025-09-18 00:52:55.639462 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-18 00:52:55.639467 | orchestrator | Thursday 18 September 2025 00:48:19 +0000 (0:00:00.563) 0:06:30.397 **** 2025-09-18 00:52:55.639472 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.639476 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.639481 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.639486 | orchestrator | 2025-09-18 00:52:55.639490 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-18 00:52:55.639495 | orchestrator | Thursday 18 September 2025 00:48:19 +0000 (0:00:00.305) 0:06:30.702 **** 2025-09-18 00:52:55.639500 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.639505 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.639509 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.639514 | orchestrator | 2025-09-18 00:52:55.639519 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-18 00:52:55.639524 | orchestrator | Thursday 18 September 2025 00:48:21 +0000 (0:00:01.485) 0:06:32.188 **** 2025-09-18 00:52:55.639528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 00:52:55.639533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 00:52:55.639538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 00:52:55.639543 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.639547 | orchestrator | 2025-09-18 00:52:55.639552 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-18 00:52:55.639557 | orchestrator | Thursday 18 September 2025 00:48:22 +0000 (0:00:00.601) 0:06:32.789 **** 2025-09-18 00:52:55.639565 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.639569 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.639574 | orchestrator | ok: 
[testbed-node-2] 2025-09-18 00:52:55.639579 | orchestrator | 2025-09-18 00:52:55.639583 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-18 00:52:55.639588 | orchestrator | 2025-09-18 00:52:55.639593 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.639598 | orchestrator | Thursday 18 September 2025 00:48:22 +0000 (0:00:00.485) 0:06:33.275 **** 2025-09-18 00:52:55.639603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.639607 | orchestrator | 2025-09-18 00:52:55.639629 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.639634 | orchestrator | Thursday 18 September 2025 00:48:23 +0000 (0:00:00.526) 0:06:33.801 **** 2025-09-18 00:52:55.639639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.639644 | orchestrator | 2025-09-18 00:52:55.639649 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.639653 | orchestrator | Thursday 18 September 2025 00:48:23 +0000 (0:00:00.434) 0:06:34.236 **** 2025-09-18 00:52:55.639658 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.639663 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.639668 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.639672 | orchestrator | 2025-09-18 00:52:55.639677 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.639682 | orchestrator | Thursday 18 September 2025 00:48:23 +0000 (0:00:00.283) 0:06:34.519 **** 2025-09-18 00:52:55.639686 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.639691 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.639696 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.639701 | orchestrator | 2025-09-18 00:52:55.639705 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.639710 | orchestrator | Thursday 18 September 2025 00:48:24 +0000 (0:00:00.836) 0:06:35.356 **** 2025-09-18 00:52:55.639715 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.639720 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.639724 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.639729 | orchestrator | 2025-09-18 00:52:55.639737 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 00:52:55.639741 | orchestrator | Thursday 18 September 2025 00:48:25 +0000 (0:00:00.624) 0:06:35.980 **** 2025-09-18 00:52:55.639746 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.639751 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.639756 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.639760 | orchestrator | 2025-09-18 00:52:55.639765 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.639770 | orchestrator | Thursday 18 September 2025 00:48:25 +0000 (0:00:00.647) 0:06:36.628 **** 2025-09-18 00:52:55.639775 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.639779 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.639784 | orchestrator | skipping: [testbed-node-5] 2025-09-18 
00:52:55.639789 | orchestrator | 2025-09-18 00:52:55.639794 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.639798 | orchestrator | Thursday 18 September 2025 00:48:26 +0000 (0:00:00.266) 0:06:36.895 **** 2025-09-18 00:52:55.639803 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.639808 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.639813 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.639817 | orchestrator | 2025-09-18 00:52:55.639822 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.639827 | orchestrator | Thursday 18 September 2025 00:48:26 +0000 (0:00:00.447) 0:06:37.342 **** 2025-09-18 00:52:55.639835 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.639840 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.639844 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.639849 | orchestrator | 2025-09-18 00:52:55.639854 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.639859 | orchestrator | Thursday 18 September 2025 00:48:26 +0000 (0:00:00.267) 0:06:37.610 **** 2025-09-18 00:52:55.639863 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.639868 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.639873 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.639878 | orchestrator | 2025-09-18 00:52:55.639882 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.639887 | orchestrator | Thursday 18 September 2025 00:48:27 +0000 (0:00:00.615) 0:06:38.226 **** 2025-09-18 00:52:55.639892 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.639896 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.639901 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.639906 | orchestrator | 2025-09-18 00:52:55.639910 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 00:52:55.639915 | orchestrator | Thursday 18 September 2025 00:48:28 +0000 (0:00:00.626) 0:06:38.852 **** 2025-09-18 00:52:55.639920 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.639925 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.639929 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.639934 | orchestrator | 2025-09-18 00:52:55.639939 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.639944 | orchestrator | Thursday 18 September 2025 00:48:28 +0000 (0:00:00.461) 0:06:39.313 **** 2025-09-18 00:52:55.639948 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.639953 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.639958 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.639962 | orchestrator | 2025-09-18 00:52:55.639967 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 00:52:55.639972 | orchestrator | Thursday 18 September 2025 00:48:28 +0000 (0:00:00.317) 0:06:39.631 **** 2025-09-18 00:52:55.639977 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.639981 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.639986 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.639991 | orchestrator | 2025-09-18 00:52:55.639996 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mds_status] ****************************** 2025-09-18 00:52:55.640000 | orchestrator | Thursday 18 September 2025 00:48:29 +0000 (0:00:00.331) 0:06:39.963 **** 2025-09-18 00:52:55.640005 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640010 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640014 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640019 | orchestrator | 2025-09-18 00:52:55.640024 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.640029 | orchestrator | Thursday 18 September 2025 00:48:29 +0000 (0:00:00.301) 0:06:40.264 **** 2025-09-18 00:52:55.640033 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640038 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640043 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640047 | orchestrator | 2025-09-18 00:52:55.640055 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.640060 | orchestrator | Thursday 18 September 2025 00:48:30 +0000 (0:00:00.669) 0:06:40.934 **** 2025-09-18 00:52:55.640065 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640069 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640074 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640079 | orchestrator | 2025-09-18 00:52:55.640084 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.640088 | orchestrator | Thursday 18 September 2025 00:48:30 +0000 (0:00:00.354) 0:06:41.289 **** 2025-09-18 00:52:55.640093 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640101 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640106 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640110 | orchestrator | 2025-09-18 00:52:55.640115 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 00:52:55.640120 | orchestrator | Thursday 18 September 2025 00:48:30 +0000 (0:00:00.305) 0:06:41.594 **** 2025-09-18 00:52:55.640125 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640130 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640134 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640139 | orchestrator | 2025-09-18 00:52:55.640144 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.640149 | orchestrator | Thursday 18 September 2025 00:48:31 +0000 (0:00:00.308) 0:06:41.903 **** 2025-09-18 00:52:55.640153 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640158 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640163 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640168 | orchestrator | 2025-09-18 00:52:55.640172 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.640179 | orchestrator | Thursday 18 September 2025 00:48:31 +0000 (0:00:00.645) 0:06:42.548 **** 2025-09-18 00:52:55.640184 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640189 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640194 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640199 | orchestrator | 2025-09-18 00:52:55.640204 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-18 00:52:55.640208 | orchestrator | Thursday 18 September 2025 00:48:32 
+0000 (0:00:00.455) 0:06:43.004 **** 2025-09-18 00:52:55.640213 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640218 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640223 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640228 | orchestrator | 2025-09-18 00:52:55.640232 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-18 00:52:55.640237 | orchestrator | Thursday 18 September 2025 00:48:32 +0000 (0:00:00.225) 0:06:43.230 **** 2025-09-18 00:52:55.640242 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:52:55.640247 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:52:55.640251 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:52:55.640256 | orchestrator | 2025-09-18 00:52:55.640261 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-18 00:52:55.640266 | orchestrator | Thursday 18 September 2025 00:48:33 +0000 (0:00:00.647) 0:06:43.877 **** 2025-09-18 00:52:55.640270 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.640275 | orchestrator | 2025-09-18 00:52:55.640280 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-18 00:52:55.640285 | orchestrator | Thursday 18 September 2025 00:48:33 +0000 (0:00:00.593) 0:06:44.470 **** 2025-09-18 00:52:55.640290 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640319 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640325 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640330 | orchestrator | 2025-09-18 00:52:55.640335 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-18 00:52:55.640339 | orchestrator | Thursday 18 September 2025 00:48:34 +0000 (0:00:00.254) 0:06:44.725 **** 2025-09-18 00:52:55.640344 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640349 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640354 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640358 | orchestrator | 2025-09-18 00:52:55.640363 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-18 00:52:55.640368 | orchestrator | Thursday 18 September 2025 00:48:34 +0000 (0:00:00.281) 0:06:45.006 **** 2025-09-18 00:52:55.640372 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640381 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640386 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640391 | orchestrator | 2025-09-18 00:52:55.640395 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-18 00:52:55.640400 | orchestrator | Thursday 18 September 2025 00:48:35 +0000 (0:00:00.770) 0:06:45.777 **** 2025-09-18 00:52:55.640405 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640410 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640414 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640419 | orchestrator | 2025-09-18 00:52:55.640424 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-18 00:52:55.640429 | orchestrator | Thursday 18 September 2025 00:48:35 +0000 (0:00:00.278) 
0:06:46.056 **** 2025-09-18 00:52:55.640433 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-18 00:52:55.640438 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-18 00:52:55.640443 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-18 00:52:55.640448 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-18 00:52:55.640453 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-18 00:52:55.640457 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-18 00:52:55.640466 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-18 00:52:55.640471 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-18 00:52:55.640475 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-18 00:52:55.640480 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-18 00:52:55.640485 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-18 00:52:55.640490 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-18 00:52:55.640494 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-18 00:52:55.640499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-18 00:52:55.640504 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-18 00:52:55.640509 | orchestrator | 2025-09-18 00:52:55.640513 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-18 00:52:55.640518 | orchestrator | Thursday 18 September 2025 00:48:40 +0000 (0:00:05.002) 0:06:51.058 **** 2025-09-18 00:52:55.640523 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640528 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640533 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640537 | orchestrator | 2025-09-18 00:52:55.640547 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-18 00:52:55.640552 | orchestrator | Thursday 18 September 2025 00:48:40 +0000 (0:00:00.298) 0:06:51.356 **** 2025-09-18 00:52:55.640556 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.640561 | orchestrator | 2025-09-18 00:52:55.640566 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-18 00:52:55.640571 | orchestrator | Thursday 18 September 2025 00:48:41 +0000 (0:00:00.815) 0:06:52.172 **** 2025-09-18 00:52:55.640576 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-18 00:52:55.640580 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-18 00:52:55.640585 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-18 00:52:55.640590 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 
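The "Apply operating system tuning" results above correspond to a handful of sysctl settings applied per OSD node. A hedged sketch of an equivalent task, reusing the values visible in the log (the sysctl_file location is an assumption):

    # Sketch only: applies the sysctl values seen in the log output above.
    - name: Apply operating system tuning
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_file: /etc/sysctl.d/ceph-tuning.conf   # assumed location
      loop:
        - { name: fs.aio-max-nr, value: "1048576" }
        - { name: fs.file-max, value: "26234859" }
        - { name: vm.zone_reclaim_mode, value: "0" }
        - { name: vm.swappiness, value: "10" }
        - { name: vm.min_free_kbytes, value: "67584" }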
2025-09-18 00:52:55.640598 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-18 00:52:55.640603 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-18 00:52:55.640608 | orchestrator | 2025-09-18 00:52:55.640612 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-18 00:52:55.640617 | orchestrator | Thursday 18 September 2025 00:48:42 +0000 (0:00:01.235) 0:06:53.408 **** 2025-09-18 00:52:55.640622 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.640627 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 00:52:55.640632 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:52:55.640636 | orchestrator | 2025-09-18 00:52:55.640641 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-18 00:52:55.640646 | orchestrator | Thursday 18 September 2025 00:48:44 +0000 (0:00:02.217) 0:06:55.625 **** 2025-09-18 00:52:55.640651 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 00:52:55.640656 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 00:52:55.640660 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.640665 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 00:52:55.640670 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-18 00:52:55.640675 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.640679 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 00:52:55.640684 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-18 00:52:55.640689 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.640693 | orchestrator | 2025-09-18 00:52:55.640698 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-18 00:52:55.640703 | orchestrator | Thursday 18 September 2025 00:48:46 +0000 (0:00:01.526) 0:06:57.151 **** 2025-09-18 00:52:55.640708 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.640712 | orchestrator | 2025-09-18 00:52:55.640717 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-18 00:52:55.640722 | orchestrator | Thursday 18 September 2025 00:48:48 +0000 (0:00:02.224) 0:06:59.376 **** 2025-09-18 00:52:55.640727 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-5, testbed-node-4 2025-09-18 00:52:55.640732 | orchestrator | 2025-09-18 00:52:55.640736 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-18 00:52:55.640741 | orchestrator | Thursday 18 September 2025 00:48:49 +0000 (0:00:00.574) 0:06:59.951 **** 2025-09-18 00:52:55.640746 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0cde6920-619d-54be-8750-7c50463ca655', 'data_vg': 'ceph-0cde6920-619d-54be-8750-7c50463ca655'}) 2025-09-18 00:52:55.640751 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b959ef4-2353-55d9-9e37-ea43ed82416b', 'data_vg': 'ceph-7b959ef4-2353-55d9-9e37-ea43ed82416b'}) 2025-09-18 00:52:55.640756 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07829316-95ed-5d0c-8777-c74850e385f5', 'data_vg': 'ceph-07829316-95ed-5d0c-8777-c74850e385f5'}) 2025-09-18 00:52:55.640764 | orchestrator | changed: 
[testbed-node-3] => (item={'data': 'osd-block-3ac78a0a-4049-5f74-bf32-d6052d628b7d', 'data_vg': 'ceph-3ac78a0a-4049-5f74-bf32-d6052d628b7d'}) 2025-09-18 00:52:55.640769 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-652709a4-002d-5e7f-9b0a-9f9e264992f4', 'data_vg': 'ceph-652709a4-002d-5e7f-9b0a-9f9e264992f4'}) 2025-09-18 00:52:55.640774 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-48f1b2b0-1ebe-571e-b515-4e988bd235b0', 'data_vg': 'ceph-48f1b2b0-1ebe-571e-b515-4e988bd235b0'}) 2025-09-18 00:52:55.640778 | orchestrator | 2025-09-18 00:52:55.640783 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-18 00:52:55.640788 | orchestrator | Thursday 18 September 2025 00:49:32 +0000 (0:00:43.223) 0:07:43.175 **** 2025-09-18 00:52:55.640796 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.640801 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.640806 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.640811 | orchestrator | 2025-09-18 00:52:55.640816 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-18 00:52:55.640820 | orchestrator | Thursday 18 September 2025 00:49:33 +0000 (0:00:00.556) 0:07:43.731 **** 2025-09-18 00:52:55.640825 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.640830 | orchestrator | 2025-09-18 00:52:55.640835 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-18 00:52:55.640842 | orchestrator | Thursday 18 September 2025 00:49:33 +0000 (0:00:00.613) 0:07:44.345 **** 2025-09-18 00:52:55.640847 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640852 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640856 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640861 | orchestrator | 2025-09-18 00:52:55.640865 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-18 00:52:55.640870 | orchestrator | Thursday 18 September 2025 00:49:34 +0000 (0:00:00.736) 0:07:45.081 **** 2025-09-18 00:52:55.640874 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.640879 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.640883 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.640888 | orchestrator | 2025-09-18 00:52:55.640892 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-18 00:52:55.640897 | orchestrator | Thursday 18 September 2025 00:49:37 +0000 (0:00:03.016) 0:07:48.098 **** 2025-09-18 00:52:55.640901 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.640906 | orchestrator | 2025-09-18 00:52:55.640911 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-18 00:52:55.640915 | orchestrator | Thursday 18 September 2025 00:49:37 +0000 (0:00:00.548) 0:07:48.646 **** 2025-09-18 00:52:55.640920 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.640924 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.640929 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.640933 | orchestrator | 2025-09-18 00:52:55.640938 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-18 00:52:55.640942 
| orchestrator | Thursday 18 September 2025 00:49:39 +0000 (0:00:01.257) 0:07:49.903 **** 2025-09-18 00:52:55.640947 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.640951 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.640956 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.640960 | orchestrator | 2025-09-18 00:52:55.640965 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-18 00:52:55.640969 | orchestrator | Thursday 18 September 2025 00:49:40 +0000 (0:00:01.525) 0:07:51.428 **** 2025-09-18 00:52:55.640974 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.640978 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.640983 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.640987 | orchestrator | 2025-09-18 00:52:55.640992 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-18 00:52:55.640997 | orchestrator | Thursday 18 September 2025 00:49:42 +0000 (0:00:01.816) 0:07:53.245 **** 2025-09-18 00:52:55.641001 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641006 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641010 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641015 | orchestrator | 2025-09-18 00:52:55.641019 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-18 00:52:55.641024 | orchestrator | Thursday 18 September 2025 00:49:42 +0000 (0:00:00.327) 0:07:53.573 **** 2025-09-18 00:52:55.641028 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641033 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641037 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641045 | orchestrator | 2025-09-18 00:52:55.641050 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-18 00:52:55.641054 | orchestrator | Thursday 18 September 2025 00:49:43 +0000 (0:00:00.319) 0:07:53.892 **** 2025-09-18 00:52:55.641059 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-09-18 00:52:55.641063 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-09-18 00:52:55.641068 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-18 00:52:55.641072 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-18 00:52:55.641077 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-18 00:52:55.641081 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-09-18 00:52:55.641086 | orchestrator | 2025-09-18 00:52:55.641090 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-18 00:52:55.641095 | orchestrator | Thursday 18 September 2025 00:49:44 +0000 (0:00:01.349) 0:07:55.242 **** 2025-09-18 00:52:55.641099 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-09-18 00:52:55.641103 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-18 00:52:55.641108 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-18 00:52:55.641112 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-09-18 00:52:55.641117 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-18 00:52:55.641121 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-18 00:52:55.641126 | orchestrator | 2025-09-18 00:52:55.641133 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-18 00:52:55.641137 | orchestrator | Thursday 18 September 2025 00:49:46 
+0000 (0:00:02.246) 0:07:57.488 **** 2025-09-18 00:52:55.641142 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-09-18 00:52:55.641146 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-18 00:52:55.641151 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-18 00:52:55.641155 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-09-18 00:52:55.641160 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-18 00:52:55.641165 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-18 00:52:55.641169 | orchestrator | 2025-09-18 00:52:55.641173 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-18 00:52:55.641178 | orchestrator | Thursday 18 September 2025 00:49:50 +0000 (0:00:03.523) 0:08:01.011 **** 2025-09-18 00:52:55.641183 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641187 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641192 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.641196 | orchestrator | 2025-09-18 00:52:55.641201 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-18 00:52:55.641205 | orchestrator | Thursday 18 September 2025 00:49:53 +0000 (0:00:02.829) 0:08:03.840 **** 2025-09-18 00:52:55.641210 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641214 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641219 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-18 00:52:55.641223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.641228 | orchestrator | 2025-09-18 00:52:55.641235 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-18 00:52:55.641239 | orchestrator | Thursday 18 September 2025 00:50:06 +0000 (0:00:12.935) 0:08:16.776 **** 2025-09-18 00:52:55.641244 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641248 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641253 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641257 | orchestrator | 2025-09-18 00:52:55.641262 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-18 00:52:55.641266 | orchestrator | Thursday 18 September 2025 00:50:06 +0000 (0:00:00.840) 0:08:17.616 **** 2025-09-18 00:52:55.641271 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641275 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641280 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641288 | orchestrator | 2025-09-18 00:52:55.641292 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-18 00:52:55.641306 | orchestrator | Thursday 18 September 2025 00:50:07 +0000 (0:00:00.453) 0:08:18.069 **** 2025-09-18 00:52:55.641311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.641315 | orchestrator | 2025-09-18 00:52:55.641320 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-18 00:52:55.641324 | orchestrator | Thursday 18 September 2025 00:50:07 +0000 (0:00:00.471) 0:08:18.541 **** 2025-09-18 00:52:55.641328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
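Just above, the play sets the noup flag before activating the OSDs, starts them, unsets the flag from a single delegate, and then polls until every OSD reports up. One way to express that flow is sketched below; the inventory group name and the exact JSON fields returned by "ceph osd stat" are assumptions.

    # Sketch of the set-noup / start / unset-noup / wait pattern.
    - name: Set noup flag
      ansible.builtin.command: ceph osd set noup
      delegate_to: "{{ groups['mons'][0] }}"   # assumed group name
      run_once: true
      changed_when: true

    # ... OSD systemd units are started here ...

    - name: Unset noup flag
      ansible.builtin.command: ceph osd unset noup
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: true

    - name: Wait for all osd to be up
      ansible.builtin.command: ceph osd stat -f json
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      register: osd_stat
      retries: 60
      delay: 10
      until: >-
        (osd_stat.stdout | from_json).num_osds > 0 and
        (osd_stat.stdout | from_json).num_osds ==
        (osd_stat.stdout | from_json).num_up_osds
      changed_when: false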
2025-09-18 00:52:55.641333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.641337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.641342 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641346 | orchestrator | 2025-09-18 00:52:55.641351 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-18 00:52:55.641355 | orchestrator | Thursday 18 September 2025 00:50:08 +0000 (0:00:00.347) 0:08:18.889 **** 2025-09-18 00:52:55.641360 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641364 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641369 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641373 | orchestrator | 2025-09-18 00:52:55.641377 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-18 00:52:55.641382 | orchestrator | Thursday 18 September 2025 00:50:08 +0000 (0:00:00.306) 0:08:19.195 **** 2025-09-18 00:52:55.641386 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641391 | orchestrator | 2025-09-18 00:52:55.641395 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-18 00:52:55.641400 | orchestrator | Thursday 18 September 2025 00:50:08 +0000 (0:00:00.203) 0:08:19.399 **** 2025-09-18 00:52:55.641404 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641409 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641413 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641418 | orchestrator | 2025-09-18 00:52:55.641422 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-18 00:52:55.641427 | orchestrator | Thursday 18 September 2025 00:50:09 +0000 (0:00:00.453) 0:08:19.853 **** 2025-09-18 00:52:55.641431 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641435 | orchestrator | 2025-09-18 00:52:55.641440 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-18 00:52:55.641444 | orchestrator | Thursday 18 September 2025 00:50:09 +0000 (0:00:00.204) 0:08:20.057 **** 2025-09-18 00:52:55.641449 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641453 | orchestrator | 2025-09-18 00:52:55.641458 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-18 00:52:55.641462 | orchestrator | Thursday 18 September 2025 00:50:09 +0000 (0:00:00.196) 0:08:20.254 **** 2025-09-18 00:52:55.641467 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641471 | orchestrator | 2025-09-18 00:52:55.641476 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-18 00:52:55.641480 | orchestrator | Thursday 18 September 2025 00:50:09 +0000 (0:00:00.121) 0:08:20.375 **** 2025-09-18 00:52:55.641485 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641489 | orchestrator | 2025-09-18 00:52:55.641494 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-18 00:52:55.641498 | orchestrator | Thursday 18 September 2025 00:50:09 +0000 (0:00:00.198) 0:08:20.573 **** 2025-09-18 00:52:55.641505 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641510 | orchestrator | 2025-09-18 00:52:55.641514 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] 
******************* 2025-09-18 00:52:55.641519 | orchestrator | Thursday 18 September 2025 00:50:10 +0000 (0:00:00.192) 0:08:20.766 **** 2025-09-18 00:52:55.641526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.641531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.641535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.641540 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641544 | orchestrator | 2025-09-18 00:52:55.641549 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-18 00:52:55.641553 | orchestrator | Thursday 18 September 2025 00:50:10 +0000 (0:00:00.362) 0:08:21.128 **** 2025-09-18 00:52:55.641558 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641562 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641567 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641571 | orchestrator | 2025-09-18 00:52:55.641576 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-18 00:52:55.641581 | orchestrator | Thursday 18 September 2025 00:50:10 +0000 (0:00:00.259) 0:08:21.388 **** 2025-09-18 00:52:55.641585 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641590 | orchestrator | 2025-09-18 00:52:55.641594 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-18 00:52:55.641599 | orchestrator | Thursday 18 September 2025 00:50:11 +0000 (0:00:00.542) 0:08:21.930 **** 2025-09-18 00:52:55.641603 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641608 | orchestrator | 2025-09-18 00:52:55.641615 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-18 00:52:55.641619 | orchestrator | 2025-09-18 00:52:55.641624 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.641629 | orchestrator | Thursday 18 September 2025 00:50:11 +0000 (0:00:00.575) 0:08:22.506 **** 2025-09-18 00:52:55.641633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.641638 | orchestrator | 2025-09-18 00:52:55.641643 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.641647 | orchestrator | Thursday 18 September 2025 00:50:12 +0000 (0:00:00.990) 0:08:23.497 **** 2025-09-18 00:52:55.641652 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.641656 | orchestrator | 2025-09-18 00:52:55.641661 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.641665 | orchestrator | Thursday 18 September 2025 00:50:14 +0000 (0:00:01.212) 0:08:24.709 **** 2025-09-18 00:52:55.641670 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641674 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641679 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641683 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.641688 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.641692 | orchestrator | 
ok: [testbed-node-2] 2025-09-18 00:52:55.641697 | orchestrator | 2025-09-18 00:52:55.641701 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.641706 | orchestrator | Thursday 18 September 2025 00:50:15 +0000 (0:00:01.287) 0:08:25.996 **** 2025-09-18 00:52:55.641710 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.641715 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.641719 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.641724 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.641728 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.641733 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.641737 | orchestrator | 2025-09-18 00:52:55.641742 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.641746 | orchestrator | Thursday 18 September 2025 00:50:16 +0000 (0:00:00.737) 0:08:26.734 **** 2025-09-18 00:52:55.641751 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.641755 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.641763 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.641767 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.641772 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.641776 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.641781 | orchestrator | 2025-09-18 00:52:55.641785 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 00:52:55.641790 | orchestrator | Thursday 18 September 2025 00:50:17 +0000 (0:00:01.028) 0:08:27.762 **** 2025-09-18 00:52:55.641794 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.641799 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.641803 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.641808 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.641812 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.641817 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.641821 | orchestrator | 2025-09-18 00:52:55.641826 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.641830 | orchestrator | Thursday 18 September 2025 00:50:17 +0000 (0:00:00.762) 0:08:28.525 **** 2025-09-18 00:52:55.641835 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641839 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641844 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641848 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.641853 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.641857 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.641862 | orchestrator | 2025-09-18 00:52:55.641866 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.641871 | orchestrator | Thursday 18 September 2025 00:50:18 +0000 (0:00:01.035) 0:08:29.560 **** 2025-09-18 00:52:55.641875 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641880 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641884 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641889 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.641894 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.641900 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.641905 | orchestrator | 
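The ceph-handler checks in this play probe each node for running Ceph containers and record the result as handler_*_status facts, which later gate the restart handlers. A purely illustrative version of that probe-and-set_fact pattern (the container runtime and name filter are assumptions):

    # Illustrative only: probe for a running ceph-crash container, record a fact.
    - name: Check for a ceph-crash container
      ansible.builtin.command: >-
        docker ps -q --filter name=ceph-crash-{{ ansible_facts['hostname'] }}
      register: ceph_crash_container_stat
      changed_when: false
      failed_when: false

    - name: Set_fact handler_crash_status
      ansible.builtin.set_fact:
        handler_crash_status: "{{ ceph_crash_container_stat.stdout_lines | length > 0 }}"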
2025-09-18 00:52:55.641909 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.641914 | orchestrator | Thursday 18 September 2025 00:50:19 +0000 (0:00:00.973) 0:08:30.534 **** 2025-09-18 00:52:55.641918 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.641923 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.641927 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.641932 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.641936 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.641941 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.641945 | orchestrator | 2025-09-18 00:52:55.641950 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.641954 | orchestrator | Thursday 18 September 2025 00:50:20 +0000 (0:00:00.616) 0:08:31.151 **** 2025-09-18 00:52:55.641959 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.641963 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.641968 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.641972 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.641977 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.641982 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.641986 | orchestrator | 2025-09-18 00:52:55.641990 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.641995 | orchestrator | Thursday 18 September 2025 00:50:21 +0000 (0:00:01.397) 0:08:32.548 **** 2025-09-18 00:52:55.641999 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642004 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642008 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642013 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642032 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.642037 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642045 | orchestrator | 2025-09-18 00:52:55.642052 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 00:52:55.642057 | orchestrator | Thursday 18 September 2025 00:50:22 +0000 (0:00:01.119) 0:08:33.668 **** 2025-09-18 00:52:55.642061 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.642066 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.642070 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.642075 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.642079 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.642084 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.642088 | orchestrator | 2025-09-18 00:52:55.642093 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.642097 | orchestrator | Thursday 18 September 2025 00:50:23 +0000 (0:00:00.860) 0:08:34.528 **** 2025-09-18 00:52:55.642102 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.642106 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.642111 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.642115 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642120 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.642124 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642129 | orchestrator | 2025-09-18 00:52:55.642133 | orchestrator | TASK [ceph-handler : Set_fact 
handler_osd_status] ****************************** 2025-09-18 00:52:55.642138 | orchestrator | Thursday 18 September 2025 00:50:24 +0000 (0:00:00.635) 0:08:35.164 **** 2025-09-18 00:52:55.642143 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642147 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642151 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642156 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.642160 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.642165 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.642169 | orchestrator | 2025-09-18 00:52:55.642174 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 00:52:55.642179 | orchestrator | Thursday 18 September 2025 00:50:25 +0000 (0:00:00.861) 0:08:36.026 **** 2025-09-18 00:52:55.642183 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642188 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642192 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642197 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.642201 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.642206 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.642210 | orchestrator | 2025-09-18 00:52:55.642215 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.642219 | orchestrator | Thursday 18 September 2025 00:50:25 +0000 (0:00:00.623) 0:08:36.649 **** 2025-09-18 00:52:55.642224 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642228 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642233 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642237 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.642242 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.642246 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.642251 | orchestrator | 2025-09-18 00:52:55.642255 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.642260 | orchestrator | Thursday 18 September 2025 00:50:26 +0000 (0:00:00.873) 0:08:37.523 **** 2025-09-18 00:52:55.642264 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.642269 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.642273 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.642278 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.642282 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.642287 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.642291 | orchestrator | 2025-09-18 00:52:55.642306 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.642311 | orchestrator | Thursday 18 September 2025 00:50:27 +0000 (0:00:00.621) 0:08:38.144 **** 2025-09-18 00:52:55.642319 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.642323 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.642328 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.642332 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:52:55.642337 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:52:55.642341 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:52:55.642345 | orchestrator | 2025-09-18 00:52:55.642350 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2025-09-18 00:52:55.642355 | orchestrator | Thursday 18 September 2025 00:50:28 +0000 (0:00:00.899) 0:08:39.043 **** 2025-09-18 00:52:55.642359 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.642364 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.642368 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.642375 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642380 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.642384 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642389 | orchestrator | 2025-09-18 00:52:55.642393 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.642398 | orchestrator | Thursday 18 September 2025 00:50:28 +0000 (0:00:00.616) 0:08:39.660 **** 2025-09-18 00:52:55.642402 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642407 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642411 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642416 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642420 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.642425 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642429 | orchestrator | 2025-09-18 00:52:55.642434 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.642438 | orchestrator | Thursday 18 September 2025 00:50:30 +0000 (0:00:01.091) 0:08:40.751 **** 2025-09-18 00:52:55.642443 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642447 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642452 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642456 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642461 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.642465 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642470 | orchestrator | 2025-09-18 00:52:55.642474 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-18 00:52:55.642479 | orchestrator | Thursday 18 September 2025 00:50:31 +0000 (0:00:01.272) 0:08:42.024 **** 2025-09-18 00:52:55.642483 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.642488 | orchestrator | 2025-09-18 00:52:55.642492 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-18 00:52:55.642509 | orchestrator | Thursday 18 September 2025 00:50:35 +0000 (0:00:04.109) 0:08:46.133 **** 2025-09-18 00:52:55.642514 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.642518 | orchestrator | 2025-09-18 00:52:55.642523 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-18 00:52:55.642527 | orchestrator | Thursday 18 September 2025 00:50:37 +0000 (0:00:02.053) 0:08:48.187 **** 2025-09-18 00:52:55.642532 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.642536 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.642541 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.642545 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642550 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.642554 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.642559 | orchestrator | 2025-09-18 00:52:55.642563 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 
2025-09-18 00:52:55.642568 | orchestrator | Thursday 18 September 2025 00:50:39 +0000 (0:00:01.579) 0:08:49.767 **** 2025-09-18 00:52:55.642572 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.642577 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.642581 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.642586 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.642593 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.642598 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.642602 | orchestrator | 2025-09-18 00:52:55.642607 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-18 00:52:55.642611 | orchestrator | Thursday 18 September 2025 00:50:40 +0000 (0:00:01.290) 0:08:51.058 **** 2025-09-18 00:52:55.642616 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.642621 | orchestrator | 2025-09-18 00:52:55.642625 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-18 00:52:55.642630 | orchestrator | Thursday 18 September 2025 00:50:41 +0000 (0:00:01.215) 0:08:52.273 **** 2025-09-18 00:52:55.642634 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.642639 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.642643 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.642648 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.642652 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.642657 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.642661 | orchestrator | 2025-09-18 00:52:55.642666 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-18 00:52:55.642670 | orchestrator | Thursday 18 September 2025 00:50:43 +0000 (0:00:01.562) 0:08:53.836 **** 2025-09-18 00:52:55.642675 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.642679 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.642684 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.642688 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.642693 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.642697 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.642702 | orchestrator | 2025-09-18 00:52:55.642706 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-18 00:52:55.642711 | orchestrator | Thursday 18 September 2025 00:50:46 +0000 (0:00:03.733) 0:08:57.569 **** 2025-09-18 00:52:55.642716 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:52:55.642720 | orchestrator | 2025-09-18 00:52:55.642725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-18 00:52:55.642729 | orchestrator | Thursday 18 September 2025 00:50:48 +0000 (0:00:01.354) 0:08:58.924 **** 2025-09-18 00:52:55.642734 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642738 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642743 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642747 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642752 | orchestrator | ok: [testbed-node-1] 2025-09-18 
00:52:55.642756 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642761 | orchestrator | 2025-09-18 00:52:55.642765 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-18 00:52:55.642770 | orchestrator | Thursday 18 September 2025 00:50:48 +0000 (0:00:00.693) 0:08:59.618 **** 2025-09-18 00:52:55.642774 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.642779 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.642783 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.642790 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:52:55.642795 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:52:55.642799 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:52:55.642804 | orchestrator | 2025-09-18 00:52:55.642808 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-18 00:52:55.642813 | orchestrator | Thursday 18 September 2025 00:50:51 +0000 (0:00:02.721) 0:09:02.339 **** 2025-09-18 00:52:55.642817 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642822 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642826 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642831 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:52:55.642840 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:52:55.642844 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:52:55.642849 | orchestrator | 2025-09-18 00:52:55.642853 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-18 00:52:55.642858 | orchestrator | 2025-09-18 00:52:55.642863 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.642867 | orchestrator | Thursday 18 September 2025 00:50:52 +0000 (0:00:00.891) 0:09:03.230 **** 2025-09-18 00:52:55.642872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.642876 | orchestrator | 2025-09-18 00:52:55.642881 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.642885 | orchestrator | Thursday 18 September 2025 00:50:53 +0000 (0:00:00.895) 0:09:04.126 **** 2025-09-18 00:52:55.642890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.642894 | orchestrator | 2025-09-18 00:52:55.642901 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.642906 | orchestrator | Thursday 18 September 2025 00:50:53 +0000 (0:00:00.545) 0:09:04.671 **** 2025-09-18 00:52:55.642910 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.642915 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.642920 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.642924 | orchestrator | 2025-09-18 00:52:55.642929 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.642933 | orchestrator | Thursday 18 September 2025 00:50:54 +0000 (0:00:00.650) 0:09:05.321 **** 2025-09-18 00:52:55.642938 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642942 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642947 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642951 | orchestrator | 2025-09-18 
00:52:55.642956 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.642960 | orchestrator | Thursday 18 September 2025 00:50:55 +0000 (0:00:00.753) 0:09:06.074 **** 2025-09-18 00:52:55.642965 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642969 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.642974 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.642978 | orchestrator | 2025-09-18 00:52:55.642983 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 00:52:55.642988 | orchestrator | Thursday 18 September 2025 00:50:56 +0000 (0:00:00.764) 0:09:06.839 **** 2025-09-18 00:52:55.642992 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.642997 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643001 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643005 | orchestrator | 2025-09-18 00:52:55.643010 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.643015 | orchestrator | Thursday 18 September 2025 00:50:56 +0000 (0:00:00.674) 0:09:07.513 **** 2025-09-18 00:52:55.643019 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643024 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643028 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643033 | orchestrator | 2025-09-18 00:52:55.643037 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.643042 | orchestrator | Thursday 18 September 2025 00:50:57 +0000 (0:00:00.485) 0:09:07.998 **** 2025-09-18 00:52:55.643046 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643051 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643055 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643060 | orchestrator | 2025-09-18 00:52:55.643064 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.643069 | orchestrator | Thursday 18 September 2025 00:50:57 +0000 (0:00:00.298) 0:09:08.297 **** 2025-09-18 00:52:55.643073 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643078 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643085 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643090 | orchestrator | 2025-09-18 00:52:55.643095 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.643099 | orchestrator | Thursday 18 September 2025 00:50:57 +0000 (0:00:00.302) 0:09:08.599 **** 2025-09-18 00:52:55.643103 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643108 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643113 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643117 | orchestrator | 2025-09-18 00:52:55.643121 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.643126 | orchestrator | Thursday 18 September 2025 00:50:58 +0000 (0:00:00.709) 0:09:09.308 **** 2025-09-18 00:52:55.643131 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643135 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643140 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643144 | orchestrator | 2025-09-18 00:52:55.643149 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 
00:52:55.643153 | orchestrator | Thursday 18 September 2025 00:50:59 +0000 (0:00:00.950) 0:09:10.259 **** 2025-09-18 00:52:55.643158 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643162 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643167 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643171 | orchestrator | 2025-09-18 00:52:55.643176 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.643180 | orchestrator | Thursday 18 September 2025 00:50:59 +0000 (0:00:00.306) 0:09:10.565 **** 2025-09-18 00:52:55.643185 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643189 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643194 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643198 | orchestrator | 2025-09-18 00:52:55.643205 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 00:52:55.643210 | orchestrator | Thursday 18 September 2025 00:51:00 +0000 (0:00:00.286) 0:09:10.852 **** 2025-09-18 00:52:55.643214 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643219 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643223 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643228 | orchestrator | 2025-09-18 00:52:55.643232 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 00:52:55.643237 | orchestrator | Thursday 18 September 2025 00:51:00 +0000 (0:00:00.323) 0:09:11.175 **** 2025-09-18 00:52:55.643241 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643246 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643250 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643255 | orchestrator | 2025-09-18 00:52:55.643259 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.643264 | orchestrator | Thursday 18 September 2025 00:51:00 +0000 (0:00:00.511) 0:09:11.687 **** 2025-09-18 00:52:55.643268 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643273 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643277 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643282 | orchestrator | 2025-09-18 00:52:55.643286 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.643291 | orchestrator | Thursday 18 September 2025 00:51:01 +0000 (0:00:00.284) 0:09:11.972 **** 2025-09-18 00:52:55.643305 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643310 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643314 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643319 | orchestrator | 2025-09-18 00:52:55.643323 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.643330 | orchestrator | Thursday 18 September 2025 00:51:01 +0000 (0:00:00.309) 0:09:12.281 **** 2025-09-18 00:52:55.643335 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643339 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643344 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643348 | orchestrator | 2025-09-18 00:52:55.643356 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 00:52:55.643361 | orchestrator | Thursday 18 September 2025 00:51:01 +0000 (0:00:00.269) 0:09:12.551 **** 
2025-09-18 00:52:55.643365 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643370 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643374 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643378 | orchestrator | 2025-09-18 00:52:55.643383 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.643387 | orchestrator | Thursday 18 September 2025 00:51:02 +0000 (0:00:00.499) 0:09:13.050 **** 2025-09-18 00:52:55.643392 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643396 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643401 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643406 | orchestrator | 2025-09-18 00:52:55.643410 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.643415 | orchestrator | Thursday 18 September 2025 00:51:02 +0000 (0:00:00.407) 0:09:13.457 **** 2025-09-18 00:52:55.643419 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643424 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643428 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643433 | orchestrator | 2025-09-18 00:52:55.643437 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-18 00:52:55.643442 | orchestrator | Thursday 18 September 2025 00:51:03 +0000 (0:00:00.472) 0:09:13.930 **** 2025-09-18 00:52:55.643447 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643451 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643456 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-18 00:52:55.643460 | orchestrator | 2025-09-18 00:52:55.643465 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-18 00:52:55.643469 | orchestrator | Thursday 18 September 2025 00:51:03 +0000 (0:00:00.608) 0:09:14.539 **** 2025-09-18 00:52:55.643474 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.643478 | orchestrator | 2025-09-18 00:52:55.643483 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-18 00:52:55.643487 | orchestrator | Thursday 18 September 2025 00:51:05 +0000 (0:00:02.142) 0:09:16.681 **** 2025-09-18 00:52:55.643493 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-18 00:52:55.643499 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643504 | orchestrator | 2025-09-18 00:52:55.643508 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-18 00:52:55.643513 | orchestrator | Thursday 18 September 2025 00:51:06 +0000 (0:00:00.178) 0:09:16.859 **** 2025-09-18 00:52:55.643518 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:52:55.643527 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 
'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:52:55.643532 | orchestrator | 2025-09-18 00:52:55.643537 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-18 00:52:55.643541 | orchestrator | Thursday 18 September 2025 00:51:14 +0000 (0:00:08.324) 0:09:25.184 **** 2025-09-18 00:52:55.643546 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 00:52:55.643550 | orchestrator | 2025-09-18 00:52:55.643557 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-18 00:52:55.643562 | orchestrator | Thursday 18 September 2025 00:51:18 +0000 (0:00:04.085) 0:09:29.269 **** 2025-09-18 00:52:55.643569 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.643574 | orchestrator | 2025-09-18 00:52:55.643578 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-18 00:52:55.643583 | orchestrator | Thursday 18 September 2025 00:51:19 +0000 (0:00:00.849) 0:09:30.118 **** 2025-09-18 00:52:55.643587 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-18 00:52:55.643592 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-18 00:52:55.643597 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-18 00:52:55.643601 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-18 00:52:55.643606 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-18 00:52:55.643610 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-18 00:52:55.643615 | orchestrator | 2025-09-18 00:52:55.643619 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-18 00:52:55.643624 | orchestrator | Thursday 18 September 2025 00:51:20 +0000 (0:00:01.079) 0:09:31.198 **** 2025-09-18 00:52:55.643628 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.643635 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 00:52:55.643640 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:52:55.643644 | orchestrator | 2025-09-18 00:52:55.643649 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-18 00:52:55.643653 | orchestrator | Thursday 18 September 2025 00:51:22 +0000 (0:00:02.135) 0:09:33.333 **** 2025-09-18 00:52:55.643658 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 00:52:55.643663 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-18 00:52:55.643667 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 00:52:55.643672 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643676 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 00:52:55.643681 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643685 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 00:52:55.643690 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-18 00:52:55.643694 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643699 | orchestrator | 2025-09-18 
00:52:55.643703 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-18 00:52:55.643708 | orchestrator | Thursday 18 September 2025 00:51:23 +0000 (0:00:01.179) 0:09:34.512 **** 2025-09-18 00:52:55.643712 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643717 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643721 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643726 | orchestrator | 2025-09-18 00:52:55.643730 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-18 00:52:55.643735 | orchestrator | Thursday 18 September 2025 00:51:26 +0000 (0:00:02.643) 0:09:37.156 **** 2025-09-18 00:52:55.643739 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.643744 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.643748 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.643753 | orchestrator | 2025-09-18 00:52:55.643757 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-18 00:52:55.643762 | orchestrator | Thursday 18 September 2025 00:51:27 +0000 (0:00:00.680) 0:09:37.836 **** 2025-09-18 00:52:55.643766 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.643771 | orchestrator | 2025-09-18 00:52:55.643775 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-18 00:52:55.643780 | orchestrator | Thursday 18 September 2025 00:51:27 +0000 (0:00:00.520) 0:09:38.356 **** 2025-09-18 00:52:55.643787 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.643792 | orchestrator | 2025-09-18 00:52:55.643796 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-18 00:52:55.643801 | orchestrator | Thursday 18 September 2025 00:51:28 +0000 (0:00:00.759) 0:09:39.116 **** 2025-09-18 00:52:55.643805 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643810 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643814 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643819 | orchestrator | 2025-09-18 00:52:55.643823 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-18 00:52:55.643828 | orchestrator | Thursday 18 September 2025 00:51:29 +0000 (0:00:01.296) 0:09:40.413 **** 2025-09-18 00:52:55.643832 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643837 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643842 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643846 | orchestrator | 2025-09-18 00:52:55.643851 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-18 00:52:55.643855 | orchestrator | Thursday 18 September 2025 00:51:30 +0000 (0:00:01.159) 0:09:41.573 **** 2025-09-18 00:52:55.643860 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643864 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643869 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643873 | orchestrator | 2025-09-18 00:52:55.643878 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-18 00:52:55.643882 | orchestrator | Thursday 18 September 2025 00:51:32 +0000 
(0:00:01.748) 0:09:43.322 **** 2025-09-18 00:52:55.643887 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643891 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643896 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643900 | orchestrator | 2025-09-18 00:52:55.643907 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-18 00:52:55.643911 | orchestrator | Thursday 18 September 2025 00:51:34 +0000 (0:00:02.255) 0:09:45.577 **** 2025-09-18 00:52:55.643916 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643921 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643925 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.643930 | orchestrator | 2025-09-18 00:52:55.643934 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-18 00:52:55.643939 | orchestrator | Thursday 18 September 2025 00:51:36 +0000 (0:00:01.221) 0:09:46.799 **** 2025-09-18 00:52:55.643943 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.643948 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.643952 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.643957 | orchestrator | 2025-09-18 00:52:55.643961 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-18 00:52:55.643966 | orchestrator | Thursday 18 September 2025 00:51:37 +0000 (0:00:01.046) 0:09:47.845 **** 2025-09-18 00:52:55.643970 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.643975 | orchestrator | 2025-09-18 00:52:55.643979 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-18 00:52:55.643984 | orchestrator | Thursday 18 September 2025 00:51:37 +0000 (0:00:00.523) 0:09:48.369 **** 2025-09-18 00:52:55.643988 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.643993 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.643997 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644002 | orchestrator | 2025-09-18 00:52:55.644006 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-18 00:52:55.644013 | orchestrator | Thursday 18 September 2025 00:51:37 +0000 (0:00:00.325) 0:09:48.694 **** 2025-09-18 00:52:55.644018 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.644022 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.644030 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.644034 | orchestrator | 2025-09-18 00:52:55.644039 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-18 00:52:55.644043 | orchestrator | Thursday 18 September 2025 00:51:39 +0000 (0:00:01.591) 0:09:50.286 **** 2025-09-18 00:52:55.644048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:52:55.644052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:52:55.644057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:52:55.644062 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644066 | orchestrator | 2025-09-18 00:52:55.644071 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-18 00:52:55.644075 | orchestrator | Thursday 18 September 2025 00:51:40 +0000 (0:00:00.628) 
0:09:50.915 **** 2025-09-18 00:52:55.644080 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644084 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644089 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644093 | orchestrator | 2025-09-18 00:52:55.644098 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-18 00:52:55.644102 | orchestrator | 2025-09-18 00:52:55.644107 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 00:52:55.644111 | orchestrator | Thursday 18 September 2025 00:51:40 +0000 (0:00:00.558) 0:09:51.474 **** 2025-09-18 00:52:55.644116 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.644120 | orchestrator | 2025-09-18 00:52:55.644125 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 00:52:55.644129 | orchestrator | Thursday 18 September 2025 00:51:41 +0000 (0:00:00.723) 0:09:52.197 **** 2025-09-18 00:52:55.644134 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.644138 | orchestrator | 2025-09-18 00:52:55.644143 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 00:52:55.644148 | orchestrator | Thursday 18 September 2025 00:51:42 +0000 (0:00:00.533) 0:09:52.730 **** 2025-09-18 00:52:55.644152 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644157 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644161 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644165 | orchestrator | 2025-09-18 00:52:55.644170 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 00:52:55.644175 | orchestrator | Thursday 18 September 2025 00:51:42 +0000 (0:00:00.519) 0:09:53.250 **** 2025-09-18 00:52:55.644179 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644184 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644188 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644193 | orchestrator | 2025-09-18 00:52:55.644197 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 00:52:55.644202 | orchestrator | Thursday 18 September 2025 00:51:43 +0000 (0:00:00.716) 0:09:53.967 **** 2025-09-18 00:52:55.644206 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644211 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644215 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644220 | orchestrator | 2025-09-18 00:52:55.644224 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 00:52:55.644229 | orchestrator | Thursday 18 September 2025 00:51:43 +0000 (0:00:00.734) 0:09:54.701 **** 2025-09-18 00:52:55.644233 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644238 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644242 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644247 | orchestrator | 2025-09-18 00:52:55.644251 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 00:52:55.644256 | orchestrator | Thursday 18 September 2025 00:51:44 +0000 (0:00:00.732) 0:09:55.433 **** 2025-09-18 00:52:55.644260 | 
orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644268 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644272 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644277 | orchestrator | 2025-09-18 00:52:55.644281 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 00:52:55.644289 | orchestrator | Thursday 18 September 2025 00:51:45 +0000 (0:00:00.680) 0:09:56.114 **** 2025-09-18 00:52:55.644293 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644321 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644325 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644330 | orchestrator | 2025-09-18 00:52:55.644334 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 00:52:55.644339 | orchestrator | Thursday 18 September 2025 00:51:45 +0000 (0:00:00.343) 0:09:56.458 **** 2025-09-18 00:52:55.644343 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644348 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644353 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644357 | orchestrator | 2025-09-18 00:52:55.644362 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 00:52:55.644366 | orchestrator | Thursday 18 September 2025 00:51:46 +0000 (0:00:00.315) 0:09:56.773 **** 2025-09-18 00:52:55.644371 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644375 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644380 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644384 | orchestrator | 2025-09-18 00:52:55.644389 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 00:52:55.644394 | orchestrator | Thursday 18 September 2025 00:51:46 +0000 (0:00:00.731) 0:09:57.505 **** 2025-09-18 00:52:55.644398 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644403 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644407 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644412 | orchestrator | 2025-09-18 00:52:55.644416 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 00:52:55.644421 | orchestrator | Thursday 18 September 2025 00:51:47 +0000 (0:00:01.024) 0:09:58.529 **** 2025-09-18 00:52:55.644425 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644432 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644437 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644442 | orchestrator | 2025-09-18 00:52:55.644446 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 00:52:55.644451 | orchestrator | Thursday 18 September 2025 00:51:48 +0000 (0:00:00.319) 0:09:58.849 **** 2025-09-18 00:52:55.644455 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644460 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644464 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644469 | orchestrator | 2025-09-18 00:52:55.644473 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 00:52:55.644478 | orchestrator | Thursday 18 September 2025 00:51:48 +0000 (0:00:00.311) 0:09:59.160 **** 2025-09-18 00:52:55.644482 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644487 | orchestrator | ok: 
[testbed-node-4] 2025-09-18 00:52:55.644491 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644496 | orchestrator | 2025-09-18 00:52:55.644500 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 00:52:55.644505 | orchestrator | Thursday 18 September 2025 00:51:48 +0000 (0:00:00.341) 0:09:59.502 **** 2025-09-18 00:52:55.644510 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644514 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644519 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644523 | orchestrator | 2025-09-18 00:52:55.644528 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 00:52:55.644532 | orchestrator | Thursday 18 September 2025 00:51:49 +0000 (0:00:00.611) 0:10:00.114 **** 2025-09-18 00:52:55.644537 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644541 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644546 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644555 | orchestrator | 2025-09-18 00:52:55.644560 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 00:52:55.644565 | orchestrator | Thursday 18 September 2025 00:51:49 +0000 (0:00:00.343) 0:10:00.458 **** 2025-09-18 00:52:55.644569 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644574 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644578 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644583 | orchestrator | 2025-09-18 00:52:55.644587 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 00:52:55.644592 | orchestrator | Thursday 18 September 2025 00:51:50 +0000 (0:00:00.328) 0:10:00.786 **** 2025-09-18 00:52:55.644596 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644601 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644606 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644610 | orchestrator | 2025-09-18 00:52:55.644615 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 00:52:55.644619 | orchestrator | Thursday 18 September 2025 00:51:50 +0000 (0:00:00.316) 0:10:01.102 **** 2025-09-18 00:52:55.644624 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644628 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644632 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644636 | orchestrator | 2025-09-18 00:52:55.644640 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 00:52:55.644644 | orchestrator | Thursday 18 September 2025 00:51:50 +0000 (0:00:00.567) 0:10:01.670 **** 2025-09-18 00:52:55.644648 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644652 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644656 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644661 | orchestrator | 2025-09-18 00:52:55.644665 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 00:52:55.644669 | orchestrator | Thursday 18 September 2025 00:51:51 +0000 (0:00:00.333) 0:10:02.003 **** 2025-09-18 00:52:55.644673 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:52:55.644677 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:52:55.644681 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:52:55.644685 | orchestrator | 
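The ceph-mds play above created the CephFS data and metadata pools and the filesystem itself on the first monitor. A minimal manual sketch of the equivalent ceph CLI steps, using the parameters visible in the task items (pg_num/pgp_num 16, size 3, replicated_rule); the filesystem name "cephfs" is an assumption, as the log does not print it, and ceph-ansible performs these steps through its own modules rather than these exact commands:

    # Sketch only: manual equivalent of the CephFS pool/filesystem creation logged above.
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule      # data pool
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule  # metadata pool
    ceph osd pool set cephfs_data size 3                                   # replica count from the task item
    ceph osd pool set cephfs_metadata size 3
    ceph osd pool application enable cephfs_data cephfs                    # 'application: cephfs' in the item
    ceph osd pool application enable cephfs_metadata cephfs
    ceph fs new cephfs cephfs_metadata cephfs_data                         # filesystem name assumed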
2025-09-18 00:52:55.644689 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-18 00:52:55.644693 | orchestrator | Thursday 18 September 2025 00:51:51 +0000 (0:00:00.558) 0:10:02.562 **** 2025-09-18 00:52:55.644697 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.644701 | orchestrator | 2025-09-18 00:52:55.644705 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-18 00:52:55.644710 | orchestrator | Thursday 18 September 2025 00:51:52 +0000 (0:00:00.768) 0:10:03.330 **** 2025-09-18 00:52:55.644716 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644720 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 00:52:55.644724 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:52:55.644728 | orchestrator | 2025-09-18 00:52:55.644733 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-18 00:52:55.644737 | orchestrator | Thursday 18 September 2025 00:51:54 +0000 (0:00:02.144) 0:10:05.474 **** 2025-09-18 00:52:55.644741 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 00:52:55.644745 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 00:52:55.644749 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.644753 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 00:52:55.644757 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-18 00:52:55.644761 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.644765 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 00:52:55.644770 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-18 00:52:55.644774 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.644781 | orchestrator | 2025-09-18 00:52:55.644785 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-18 00:52:55.644790 | orchestrator | Thursday 18 September 2025 00:51:56 +0000 (0:00:01.262) 0:10:06.737 **** 2025-09-18 00:52:55.644794 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.644798 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.644802 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.644806 | orchestrator | 2025-09-18 00:52:55.644810 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-18 00:52:55.644816 | orchestrator | Thursday 18 September 2025 00:51:56 +0000 (0:00:00.308) 0:10:07.045 **** 2025-09-18 00:52:55.644821 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.644825 | orchestrator | 2025-09-18 00:52:55.644829 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-18 00:52:55.644833 | orchestrator | Thursday 18 September 2025 00:51:57 +0000 (0:00:00.817) 0:10:07.863 **** 2025-09-18 00:52:55.644837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.644841 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.644845 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-18 00:52:55.644850 | orchestrator | 2025-09-18 00:52:55.644854 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-18 00:52:55.644858 | orchestrator | Thursday 18 September 2025 00:51:57 +0000 (0:00:00.828) 0:10:08.692 **** 2025-09-18 00:52:55.644862 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644866 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-18 00:52:55.644870 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644874 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-18 00:52:55.644878 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644882 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-18 00:52:55.644886 | orchestrator | 2025-09-18 00:52:55.644890 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-18 00:52:55.644894 | orchestrator | Thursday 18 September 2025 00:52:02 +0000 (0:00:04.574) 0:10:13.266 **** 2025-09-18 00:52:55.644899 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644903 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:52:55.644907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644911 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:52:55.644915 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:52:55.644919 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:52:55.644923 | orchestrator | 2025-09-18 00:52:55.644927 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-18 00:52:55.644931 | orchestrator | Thursday 18 September 2025 00:52:05 +0000 (0:00:03.002) 0:10:16.269 **** 2025-09-18 00:52:55.644935 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 00:52:55.644939 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.644948 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 00:52:55.644952 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.644956 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 00:52:55.644960 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.644964 | orchestrator | 2025-09-18 00:52:55.644969 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-18 00:52:55.644973 | orchestrator | Thursday 18 September 2025 00:52:06 +0000 (0:00:01.319) 0:10:17.588 **** 2025-09-18 00:52:55.644979 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-18 00:52:55.644983 
| orchestrator | 2025-09-18 00:52:55.644987 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-18 00:52:55.644991 | orchestrator | Thursday 18 September 2025 00:52:07 +0000 (0:00:00.257) 0:10:17.845 **** 2025-09-18 00:52:55.644995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645016 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.645020 | orchestrator | 2025-09-18 00:52:55.645025 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-18 00:52:55.645029 | orchestrator | Thursday 18 September 2025 00:52:07 +0000 (0:00:00.833) 0:10:18.679 **** 2025-09-18 00:52:55.645035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-18 00:52:55.645056 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.645060 | orchestrator | 2025-09-18 00:52:55.645064 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-18 00:52:55.645068 | orchestrator | Thursday 18 September 2025 00:52:08 +0000 (0:00:00.634) 0:10:19.314 **** 2025-09-18 00:52:55.645072 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-18 00:52:55.645076 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-18 00:52:55.645080 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-18 00:52:55.645084 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-18 00:52:55.645089 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-18 00:52:55.645096 | orchestrator | 2025-09-18 00:52:55.645100 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-18 00:52:55.645104 | orchestrator | Thursday 18 September 2025 00:52:39 +0000 (0:00:30.820) 0:10:50.135 **** 2025-09-18 00:52:55.645108 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.645112 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.645116 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.645120 | orchestrator | 2025-09-18 00:52:55.645124 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-18 00:52:55.645129 | orchestrator | Thursday 18 September 2025 00:52:39 +0000 (0:00:00.299) 0:10:50.434 **** 2025-09-18 00:52:55.645133 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:52:55.645137 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:52:55.645141 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:52:55.645145 | orchestrator | 2025-09-18 00:52:55.645149 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-18 00:52:55.645153 | orchestrator | Thursday 18 September 2025 00:52:40 +0000 (0:00:00.600) 0:10:51.035 **** 2025-09-18 00:52:55.645157 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.645161 | orchestrator | 2025-09-18 00:52:55.645165 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-18 00:52:55.645169 | orchestrator | Thursday 18 September 2025 00:52:40 +0000 (0:00:00.573) 0:10:51.609 **** 2025-09-18 00:52:55.645174 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:52:55.645178 | orchestrator | 2025-09-18 00:52:55.645182 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-18 00:52:55.645186 | orchestrator | Thursday 18 September 2025 00:52:41 +0000 (0:00:00.801) 0:10:52.410 **** 2025-09-18 00:52:55.645192 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.645196 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.645200 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.645204 | orchestrator | 2025-09-18 00:52:55.645208 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-18 00:52:55.645212 | orchestrator | Thursday 18 September 2025 00:52:43 +0000 (0:00:01.345) 0:10:53.756 **** 2025-09-18 00:52:55.645216 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.645220 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.645225 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.645229 | orchestrator | 2025-09-18 00:52:55.645233 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-18 00:52:55.645237 | orchestrator | Thursday 18 September 2025 00:52:44 +0000 (0:00:01.177) 0:10:54.933 **** 2025-09-18 00:52:55.645241 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:52:55.645245 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:52:55.645249 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:52:55.645253 | orchestrator | 
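The "Create rgw pools" task above created the five default RGW pools, delegated to testbed-node-0. A hedged sketch of equivalent ceph CLI commands using the parameters from the task items (pg_num 8, size 3, replicated); the actual run goes through ceph-ansible's own pool handling, so the exact invocation may differ:

    # Sketch only: manual equivalent of the RGW pool creation logged above.
    for pool in default.rgw.buckets.data default.rgw.buckets.index \
                default.rgw.control default.rgw.log default.rgw.meta; do
        ceph osd pool create "${pool}" 8 8 replicated   # pg_num/pgp_num 8, replicated type
        ceph osd pool set "${pool}" size 3              # replica count from the task item
        ceph osd pool application enable "${pool}" rgw  # tag pools for the rgw application
    done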
2025-09-18 00:52:55.645257 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-09-18 00:52:55.645261 | orchestrator | Thursday 18 September 2025 00:52:46 +0000 (0:00:01.891) 0:10:56.824 ****
2025-09-18 00:52:55.645265 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-18 00:52:55.645270 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-18 00:52:55.645276 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-18 00:52:55.645280 | orchestrator |
2025-09-18 00:52:55.645284 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-18 00:52:55.645289 | orchestrator | Thursday 18 September 2025 00:52:48 +0000 (0:00:02.870) 0:10:59.695 ****
2025-09-18 00:52:55.645305 | orchestrator | skipping: [testbed-node-3]
2025-09-18 00:52:55.645310 | orchestrator | skipping: [testbed-node-4]
2025-09-18 00:52:55.645314 | orchestrator | skipping: [testbed-node-5]
2025-09-18 00:52:55.645318 | orchestrator |
2025-09-18 00:52:55.645322 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-18 00:52:55.645326 | orchestrator | Thursday 18 September 2025 00:52:49 +0000 (0:00:00.342) 0:11:00.037 ****
2025-09-18 00:52:55.645330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 00:52:55.645334 | orchestrator |
2025-09-18 00:52:55.645338 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-18 00:52:55.645342 | orchestrator | Thursday 18 September 2025 00:52:50 +0000 (0:00:00.819) 0:11:00.857 ****
2025-09-18 00:52:55.645346 | orchestrator | ok: [testbed-node-3]
2025-09-18 00:52:55.645350 | orchestrator | ok: [testbed-node-4]
2025-09-18 00:52:55.645354 | orchestrator | ok: [testbed-node-5]
2025-09-18 00:52:55.645358 | orchestrator |
2025-09-18 00:52:55.645362 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-18 00:52:55.645366 | orchestrator | Thursday 18 September 2025 00:52:50 +0000 (0:00:00.322) 0:11:01.179 ****
2025-09-18 00:52:55.645370 | orchestrator | skipping: [testbed-node-3]
2025-09-18 00:52:55.645374 | orchestrator | skipping: [testbed-node-4]
2025-09-18 00:52:55.645378 | orchestrator | skipping: [testbed-node-5]
2025-09-18 00:52:55.645383 | orchestrator |
2025-09-18 00:52:55.645387 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-18 00:52:55.645391 | orchestrator | Thursday 18 September 2025 00:52:50 +0000 (0:00:00.341) 0:11:01.521 ****
2025-09-18 00:52:55.645395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 00:52:55.645399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 00:52:55.645403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 00:52:55.645407 | orchestrator | skipping: [testbed-node-3]
2025-09-18 00:52:55.645411 | orchestrator |
2025-09-18 00:52:55.645415 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-18 00:52:55.645419 | orchestrator | Thursday 18 September 2025 00:52:51 +0000 (0:00:01.138) 0:11:02.660 ****
2025-09-18 00:52:55.645423 | orchestrator | ok: [testbed-node-3]
2025-09-18 00:52:55.645427 | orchestrator | ok: [testbed-node-4]
2025-09-18 00:52:55.645431 | orchestrator | ok: [testbed-node-5]
2025-09-18 00:52:55.645435 | orchestrator |
2025-09-18 00:52:55.645439 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 00:52:55.645443 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-09-18 00:52:55.645447 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-18 00:52:55.645452 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-18 00:52:55.645456 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-09-18 00:52:55.645460 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-18 00:52:55.645464 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-18 00:52:55.645468 | orchestrator |
2025-09-18 00:52:55.645472 | orchestrator |
2025-09-18 00:52:55.645476 | orchestrator |
2025-09-18 00:52:55.645482 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 00:52:55.645487 | orchestrator | Thursday 18 September 2025 00:52:52 +0000 (0:00:00.261) 0:11:02.922 ****
2025-09-18 00:52:55.645494 | orchestrator | ===============================================================================
2025-09-18 00:52:55.645498 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.65s
2025-09-18 00:52:55.645502 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.22s
2025-09-18 00:52:55.645506 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.97s
2025-09-18 00:52:55.645510 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.82s
2025-09-18 00:52:55.645514 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.99s
2025-09-18 00:52:55.645518 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.89s
2025-09-18 00:52:55.645522 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.94s
2025-09-18 00:52:55.645526 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.65s
2025-09-18 00:52:55.645530 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.60s
2025-09-18 00:52:55.645534 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.32s
2025-09-18 00:52:55.645538 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.97s
2025-09-18 00:52:55.645544 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.47s
2025-09-18 00:52:55.645549 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.00s
2025-09-18 00:52:55.645553 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.72s
2025-09-18 00:52:55.645557 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.57s
2025-09-18 00:52:55.645561 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.11s
2025-09-18 00:52:55.645565 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.09s
2025-09-18 00:52:55.645569 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.90s
2025-09-18 00:52:55.645573 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.73s
2025-09-18 00:52:55.645577 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.70s
2025-09-18 00:52:55.645581 | orchestrator | 2025-09-18 00:52:55 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:52:55.645585 | orchestrator | 2025-09-18 00:52:55 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED
2025-09-18 00:52:55.645590 | orchestrator | 2025-09-18 00:52:55 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:52:55.645594 | orchestrator | 2025-09-18 00:52:55 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:52:58.664461 | orchestrator | 2025-09-18 00:52:58 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:52:58.666211 | orchestrator | 2025-09-18 00:52:58 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED
2025-09-18 00:52:58.668176 | orchestrator | 2025-09-18 00:52:58 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:52:58.668223 | orchestrator | 2025-09-18 00:52:58 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:01.710631 | orchestrator | 2025-09-18 00:53:01 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:01.711132 | orchestrator | 2025-09-18 00:53:01 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED
2025-09-18 00:53:01.713236 | orchestrator | 2025-09-18 00:53:01 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:01.713259 | orchestrator | 2025-09-18 00:53:01 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:04.749571 | orchestrator | 2025-09-18 00:53:04 | INFO  | Task
46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:04.749958 | orchestrator | 2025-09-18 00:53:04 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:04.751148 | orchestrator | 2025-09-18 00:53:04 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:04.751177 | orchestrator | 2025-09-18 00:53:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:07.796929 | orchestrator | 2025-09-18 00:53:07 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:07.798512 | orchestrator | 2025-09-18 00:53:07 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:07.800610 | orchestrator | 2025-09-18 00:53:07 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:07.800636 | orchestrator | 2025-09-18 00:53:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:10.847475 | orchestrator | 2025-09-18 00:53:10 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:10.848698 | orchestrator | 2025-09-18 00:53:10 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:10.850164 | orchestrator | 2025-09-18 00:53:10 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:10.850194 | orchestrator | 2025-09-18 00:53:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:13.896387 | orchestrator | 2025-09-18 00:53:13 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:13.897989 | orchestrator | 2025-09-18 00:53:13 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:13.900475 | orchestrator | 2025-09-18 00:53:13 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:13.901750 | orchestrator | 2025-09-18 00:53:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:16.945247 | orchestrator | 2025-09-18 00:53:16 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:16.946607 | orchestrator | 2025-09-18 00:53:16 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:16.948390 | orchestrator | 2025-09-18 00:53:16 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:16.948422 | orchestrator | 2025-09-18 00:53:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:19.999626 | orchestrator | 2025-09-18 00:53:19 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:20.002268 | orchestrator | 2025-09-18 00:53:20 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:20.005671 | orchestrator | 2025-09-18 00:53:20 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:20.006379 | orchestrator | 2025-09-18 00:53:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:23.061839 | orchestrator | 2025-09-18 00:53:23 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:23.063021 | orchestrator | 2025-09-18 00:53:23 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:23.064491 | orchestrator | 2025-09-18 00:53:23 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:23.064863 | orchestrator | 2025-09-18 00:53:23 | INFO  | Wait 1 second(s) until the next 
check 2025-09-18 00:53:26.106789 | orchestrator | 2025-09-18 00:53:26 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:26.108810 | orchestrator | 2025-09-18 00:53:26 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state STARTED 2025-09-18 00:53:26.108843 | orchestrator | 2025-09-18 00:53:26 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:26.108856 | orchestrator | 2025-09-18 00:53:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:29.157865 | orchestrator | 2025-09-18 00:53:29 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED 2025-09-18 00:53:29.159724 | orchestrator | 2025-09-18 00:53:29 | INFO  | Task 2a832a78-5850-4902-9bdb-8de9558b7cb0 is in state SUCCESS 2025-09-18 00:53:29.162648 | orchestrator | 2025-09-18 00:53:29.162690 | orchestrator | 2025-09-18 00:53:29.162852 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:53:29.162867 | orchestrator | 2025-09-18 00:53:29.162878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:53:29.162890 | orchestrator | Thursday 18 September 2025 00:50:45 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-18 00:53:29.162902 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:29.162914 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:29.162925 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:29.162936 | orchestrator | 2025-09-18 00:53:29.162947 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:53:29.162958 | orchestrator | Thursday 18 September 2025 00:50:46 +0000 (0:00:00.301) 0:00:00.562 **** 2025-09-18 00:53:29.162970 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-18 00:53:29.162981 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-18 00:53:29.162992 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-18 00:53:29.163002 | orchestrator | 2025-09-18 00:53:29.163013 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-18 00:53:29.163025 | orchestrator | 2025-09-18 00:53:29.163036 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-18 00:53:29.163047 | orchestrator | Thursday 18 September 2025 00:50:46 +0000 (0:00:00.436) 0:00:00.998 **** 2025-09-18 00:53:29.163058 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:53:29.163069 | orchestrator | 2025-09-18 00:53:29.163080 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-18 00:53:29.163090 | orchestrator | Thursday 18 September 2025 00:50:47 +0000 (0:00:00.530) 0:00:01.528 **** 2025-09-18 00:53:29.163101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-18 00:53:29.163112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-18 00:53:29.163127 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-18 00:53:29.163138 | orchestrator | 2025-09-18 00:53:29.163149 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-18 00:53:29.163160 | orchestrator | 
Thursday 18 September 2025 00:50:47 +0000 (0:00:00.694) 0:00:02.223 **** 2025-09-18 00:53:29.163191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.163228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.163252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.163266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.163280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.163323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.163344 | orchestrator | 2025-09-18 00:53:29.163355 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-18 00:53:29.163366 | orchestrator | Thursday 18 September 2025 00:50:49 +0000 (0:00:01.861) 0:00:04.084 **** 2025-09-18 00:53:29.163377 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:53:29.163388 | orchestrator | 2025-09-18 00:53:29.163399 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-18 00:53:29.163409 | orchestrator | Thursday 18 September 2025 00:50:50 +0000 (0:00:00.564) 0:00:04.648 **** 2025-09-18 00:53:29.163431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.163444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.163455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.163482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.163505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.163520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.163534 | orchestrator | 2025-09-18 00:53:29.163547 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-18 00:53:29.163559 | orchestrator | Thursday 18 September 2025 00:50:52 +0000 (0:00:02.712) 0:00:07.361 **** 2025-09-18 00:53:29.163572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:53:29.163631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:53:29.163646 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:29.163660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:53:29.163682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:53:29.163696 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:29.163710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2025-09-18 00:53:29.163737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:53:29.163751 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:29.163764 | orchestrator | 2025-09-18 00:53:29.163776 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-18 00:53:29.163789 | orchestrator | Thursday 18 September 2025 00:50:54 +0000 (0:00:01.209) 0:00:08.570 **** 2025-09-18 00:53:29.163801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:53:29.163823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:53:29.163837 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:29.163850 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:53:29.163875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:53:29.163888 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:29.163899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-18 00:53:29.163919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-18 00:53:29.163931 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:29.163942 | orchestrator | 2025-09-18 00:53:29.163953 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-18 00:53:29.163964 | orchestrator | Thursday 18 September 2025 00:50:55 +0000 (0:00:01.255) 0:00:09.826 **** 2025-09-18 00:53:29.163975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.164008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.164020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.164039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.164052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.164076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.164088 | orchestrator | 2025-09-18 00:53:29.164099 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-18 00:53:29.164110 | orchestrator | Thursday 18 September 2025 00:50:57 +0000 (0:00:02.250) 0:00:12.077 **** 2025-09-18 00:53:29.164121 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:29.164132 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:29.164143 | orchestrator | changed: [testbed-node-1] 
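The healthcheck entries in the container definitions above (interval 30, retries 3, timeout 30, a `healthcheck_curl` probe against the node's OpenSearch or Dashboards port) amount to a periodic HTTP check run inside the container. The sketch below is only an illustrative stand-in for that idea; the function name `probe_endpoint` and the use of urllib are assumptions for this sketch, not the actual `healthcheck_curl` helper shipped in the Kolla images.

```python
# Minimal sketch of what such a healthcheck boils down to: probe an HTTP
# endpoint, allow a few retries, and treat any successful HTTP response
# within the timeout as healthy. Illustrative only; not the real
# healthcheck_curl helper used inside the Kolla containers.
import time
import urllib.error
import urllib.request


def probe_endpoint(url: str, retries: int = 3, timeout: float = 30.0,
                   interval: float = 30.0) -> bool:
    """Return True as soon as one attempt gets a non-error HTTP response."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            if attempt < retries:
                time.sleep(interval)
    return False


if __name__ == "__main__":
    # Values mirror the healthcheck parameters shown in the log above.
    print(probe_endpoint("http://192.168.16.10:9200", retries=3, timeout=30.0))
```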
2025-09-18 00:53:29.164153 | orchestrator | 2025-09-18 00:53:29.164164 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-18 00:53:29.164175 | orchestrator | Thursday 18 September 2025 00:51:00 +0000 (0:00:02.553) 0:00:14.630 **** 2025-09-18 00:53:29.164186 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:29.164196 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:29.164207 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:29.164218 | orchestrator | 2025-09-18 00:53:29.164228 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-18 00:53:29.164239 | orchestrator | Thursday 18 September 2025 00:51:02 +0000 (0:00:01.898) 0:00:16.529 **** 2025-09-18 00:53:29.164250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.164268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.164287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 00:53:29.164323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.164336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.164356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 00:53:29.164375 | orchestrator | 2025-09-18 00:53:29.164386 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-18 00:53:29.164397 | orchestrator | Thursday 18 September 2025 00:51:04 +0000 (0:00:02.381) 0:00:18.910 **** 2025-09-18 00:53:29.164408 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:29.164419 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:29.164430 | 
orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:29.164440 | orchestrator | 2025-09-18 00:53:29.164451 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-18 00:53:29.164462 | orchestrator | Thursday 18 September 2025 00:51:04 +0000 (0:00:00.272) 0:00:19.183 **** 2025-09-18 00:53:29.164473 | orchestrator | 2025-09-18 00:53:29.164483 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-18 00:53:29.164494 | orchestrator | Thursday 18 September 2025 00:51:04 +0000 (0:00:00.061) 0:00:19.244 **** 2025-09-18 00:53:29.164505 | orchestrator | 2025-09-18 00:53:29.164516 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-18 00:53:29.164526 | orchestrator | Thursday 18 September 2025 00:51:04 +0000 (0:00:00.104) 0:00:19.349 **** 2025-09-18 00:53:29.164537 | orchestrator | 2025-09-18 00:53:29.164548 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-18 00:53:29.164558 | orchestrator | Thursday 18 September 2025 00:51:04 +0000 (0:00:00.063) 0:00:19.412 **** 2025-09-18 00:53:29.164569 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:29.164580 | orchestrator | 2025-09-18 00:53:29.164591 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-18 00:53:29.164602 | orchestrator | Thursday 18 September 2025 00:51:05 +0000 (0:00:00.182) 0:00:19.595 **** 2025-09-18 00:53:29.164612 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:29.164623 | orchestrator | 2025-09-18 00:53:29.164634 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-18 00:53:29.164656 | orchestrator | Thursday 18 September 2025 00:51:05 +0000 (0:00:00.484) 0:00:20.080 **** 2025-09-18 00:53:29.164667 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:29.164678 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:29.164688 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:29.164699 | orchestrator | 2025-09-18 00:53:29.164710 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-18 00:53:29.164721 | orchestrator | Thursday 18 September 2025 00:52:03 +0000 (0:00:57.583) 0:01:17.663 **** 2025-09-18 00:53:29.164737 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:29.164748 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:29.164759 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:29.164769 | orchestrator | 2025-09-18 00:53:29.164780 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-18 00:53:29.164791 | orchestrator | Thursday 18 September 2025 00:53:14 +0000 (0:01:11.780) 0:02:29.444 **** 2025-09-18 00:53:29.164802 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:53:29.164812 | orchestrator | 2025-09-18 00:53:29.164823 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-18 00:53:29.164833 | orchestrator | Thursday 18 September 2025 00:53:15 +0000 (0:00:00.529) 0:02:29.974 **** 2025-09-18 00:53:29.164844 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:29.164855 | orchestrator | 2025-09-18 00:53:29.164866 | orchestrator | TASK [opensearch : Check if a log retention policy exists] 
*********************
2025-09-18 00:53:29.164877 | orchestrator | Thursday 18 September 2025 00:53:18 +0000 (0:00:02.842) 0:02:32.817 ****
2025-09-18 00:53:29.164887 | orchestrator | ok: [testbed-node-0]
2025-09-18 00:53:29.164905 | orchestrator |
2025-09-18 00:53:29.164916 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-18 00:53:29.164926 | orchestrator | Thursday 18 September 2025 00:53:20 +0000 (0:00:02.336) 0:02:35.153 ****
2025-09-18 00:53:29.164937 | orchestrator | changed: [testbed-node-0]
2025-09-18 00:53:29.164948 | orchestrator |
2025-09-18 00:53:29.164959 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-18 00:53:29.164970 | orchestrator | Thursday 18 September 2025 00:53:23 +0000 (0:00:03.204) 0:02:38.357 ****
2025-09-18 00:53:29.164980 | orchestrator | changed: [testbed-node-0]
2025-09-18 00:53:29.164991 | orchestrator |
2025-09-18 00:53:29.165002 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 00:53:29.165014 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-18 00:53:29.165026 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-18 00:53:29.165037 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-18 00:53:29.165048 | orchestrator |
2025-09-18 00:53:29.165059 | orchestrator |
2025-09-18 00:53:29.165070 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 00:53:29.165086 | orchestrator | Thursday 18 September 2025 00:53:26 +0000 (0:00:02.542) 0:02:40.900 ****
2025-09-18 00:53:29.165097 | orchestrator | ===============================================================================
2025-09-18 00:53:29.165108 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.78s
2025-09-18 00:53:29.165118 | orchestrator | opensearch : Restart opensearch container ------------------------------ 57.58s
2025-09-18 00:53:29.165129 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.20s
2025-09-18 00:53:29.165140 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.84s
2025-09-18 00:53:29.165151 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.71s
2025-09-18 00:53:29.165161 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.55s
2025-09-18 00:53:29.165172 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.54s
2025-09-18 00:53:29.165182 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.38s
2025-09-18 00:53:29.165193 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s
2025-09-18 00:53:29.165204 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.25s
2025-09-18 00:53:29.165214 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.90s
2025-09-18 00:53:29.165225 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.86s
2025-09-18 00:53:29.165236 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.26s
2025-09-18 00:53:29.165246 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.21s
2025-09-18 00:53:29.165257 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s
2025-09-18 00:53:29.165268 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s
2025-09-18 00:53:29.165278 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-09-18 00:53:29.165289 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-09-18 00:53:29.165317 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.49s
2025-09-18 00:53:29.165328 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-09-18 00:53:29.165339 | orchestrator | 2025-09-18 00:53:29 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:29.165350 | orchestrator | 2025-09-18 00:53:29 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:32.218553 | orchestrator | 2025-09-18 00:53:32 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:32.220090 | orchestrator | 2025-09-18 00:53:32 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:32.220162 | orchestrator | 2025-09-18 00:53:32 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:35.268438 | orchestrator | 2025-09-18 00:53:35 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:35.270448 | orchestrator | 2025-09-18 00:53:35 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:35.270505 | orchestrator | 2025-09-18 00:53:35 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:38.322985 | orchestrator | 2025-09-18 00:53:38 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:38.324111 | orchestrator | 2025-09-18 00:53:38 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:38.324233 | orchestrator | 2025-09-18 00:53:38 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:41.368591 | orchestrator | 2025-09-18 00:53:41 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:41.369078 | orchestrator | 2025-09-18 00:53:41 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:41.369238 | orchestrator | 2025-09-18 00:53:41 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:44.412822 | orchestrator | 2025-09-18 00:53:44 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:44.415626 | orchestrator | 2025-09-18 00:53:44 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:44.415692 | orchestrator | 2025-09-18 00:53:44 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:47.453460 | orchestrator | 2025-09-18 00:53:47 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:47.453565 | orchestrator | 2025-09-18 00:53:47 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:47.453580 | orchestrator | 2025-09-18 00:53:47 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:50.502271 | orchestrator | 2025-09-18 00:53:50 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:50.503098 | orchestrator | 2025-09-18 00:53:50 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:50.503540 | orchestrator | 2025-09-18 00:53:50 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:53.553460 | orchestrator | 2025-09-18 00:53:53 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:53.554276 | orchestrator | 2025-09-18 00:53:53 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:53.554447 | orchestrator | 2025-09-18 00:53:53 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:56.597568 | orchestrator | 2025-09-18 00:53:56 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state STARTED
2025-09-18 00:53:56.601006 | orchestrator | 2025-09-18 00:53:56 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED
2025-09-18 00:53:56.601042 | orchestrator | 2025-09-18 00:53:56 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:53:59.649208 | orchestrator | 2025-09-18 00:53:59 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED
2025-09-18 00:53:59.649393 | orchestrator | 2025-09-18 00:53:59 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED
2025-09-18 00:53:59.649965 | orchestrator | 2025-09-18 00:53:59 | INFO  | Task 46e7a459-9e0b-49b0-ab55-5de00ab2a4fe is in state SUCCESS
2025-09-18 00:53:59.653117 | orchestrator |
2025-09-18 00:53:59.653156 | orchestrator |
2025-09-18 00:53:59.653168 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-18 00:53:59.653180 | orchestrator |
2025-09-18 00:53:59.653191 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-18 00:53:59.653203 | orchestrator | Thursday 18 September 2025 00:50:45 +0000 (0:00:00.133) 0:00:00.133 ****
2025-09-18 00:53:59.653214 | orchestrator | ok: [localhost] => {
2025-09-18 00:53:59.653226 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-18 00:53:59.653238 | orchestrator | }
2025-09-18 00:53:59.653249 | orchestrator |
2025-09-18 00:53:59.653260 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-18 00:53:59.653271 | orchestrator | Thursday 18 September 2025 00:50:45 +0000 (0:00:00.061) 0:00:00.194 ****
2025-09-18 00:53:59.653282 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-18 00:53:59.653321 | orchestrator | ...ignoring 2025-09-18 00:53:59.653335 | orchestrator | 2025-09-18 00:53:59.653346 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-18 00:53:59.653357 | orchestrator | Thursday 18 September 2025 00:50:48 +0000 (0:00:02.873) 0:00:03.068 **** 2025-09-18 00:53:59.653368 | orchestrator | skipping: [localhost] 2025-09-18 00:53:59.653379 | orchestrator | 2025-09-18 00:53:59.653406 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-18 00:53:59.653417 | orchestrator | Thursday 18 September 2025 00:50:48 +0000 (0:00:00.057) 0:00:03.125 **** 2025-09-18 00:53:59.653428 | orchestrator | ok: [localhost] 2025-09-18 00:53:59.653439 | orchestrator | 2025-09-18 00:53:59.653450 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:53:59.653460 | orchestrator | 2025-09-18 00:53:59.653471 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:53:59.653482 | orchestrator | Thursday 18 September 2025 00:50:48 +0000 (0:00:00.157) 0:00:03.282 **** 2025-09-18 00:53:59.653493 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.653504 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.653514 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.653525 | orchestrator | 2025-09-18 00:53:59.653536 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:53:59.653547 | orchestrator | Thursday 18 September 2025 00:50:49 +0000 (0:00:00.324) 0:00:03.607 **** 2025-09-18 00:53:59.653557 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-18 00:53:59.653569 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-18 00:53:59.653579 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-18 00:53:59.653590 | orchestrator | 2025-09-18 00:53:59.653601 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-18 00:53:59.653611 | orchestrator | 2025-09-18 00:53:59.653622 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-18 00:53:59.653633 | orchestrator | Thursday 18 September 2025 00:50:49 +0000 (0:00:00.642) 0:00:04.250 **** 2025-09-18 00:53:59.653644 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 00:53:59.653654 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-18 00:53:59.653665 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-18 00:53:59.653675 | orchestrator | 2025-09-18 00:53:59.653686 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 00:53:59.653697 | orchestrator | Thursday 18 September 2025 00:50:50 +0000 (0:00:00.380) 0:00:04.631 **** 2025-09-18 00:53:59.653707 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:53:59.653737 | orchestrator | 2025-09-18 00:53:59.653749 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-18 00:53:59.653761 | orchestrator | Thursday 18 September 2025 00:50:50 +0000 (0:00:00.570) 0:00:05.201 **** 2025-09-18 
00:53:59.653795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.653821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.653837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.653858 | orchestrator | 2025-09-18 00:53:59.653879 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-18 00:53:59.653892 | orchestrator | Thursday 18 September 2025 00:50:53 +0000 (0:00:03.093) 0:00:08.294 **** 2025-09-18 00:53:59.653905 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.653918 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.653930 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.653943 | orchestrator | 2025-09-18 00:53:59.653955 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-18 00:53:59.653967 | orchestrator | Thursday 18 September 2025 00:50:54 +0000 (0:00:00.843) 0:00:09.138 **** 2025-09-18 00:53:59.653980 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.653993 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.654005 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.654063 | orchestrator | 2025-09-18 00:53:59.654079 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-18 00:53:59.654091 | orchestrator | Thursday 18 September 2025 00:50:56 +0000 (0:00:01.522) 0:00:10.660 **** 2025-09-18 00:53:59.654111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.654141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.654160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.654179 | orchestrator | 2025-09-18 00:53:59.654190 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-18 00:53:59.654201 | orchestrator | Thursday 18 September 2025 00:50:59 +0000 (0:00:03.573) 0:00:14.234 **** 2025-09-18 00:53:59.654212 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.654223 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.654233 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.654244 | orchestrator | 2025-09-18 00:53:59.654255 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-18 00:53:59.654266 | orchestrator | Thursday 18 September 2025 00:51:00 +0000 (0:00:01.020) 0:00:15.254 **** 2025-09-18 00:53:59.654276 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.654287 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:59.654318 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:59.654329 | orchestrator | 2025-09-18 00:53:59.654340 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 00:53:59.654351 | orchestrator | Thursday 18 September 2025 00:51:05 +0000 (0:00:04.377) 0:00:19.631 **** 2025-09-18 00:53:59.654362 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:53:59.654373 | orchestrator | 2025-09-18 00:53:59.654383 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-18 00:53:59.654394 | orchestrator | Thursday 18 September 2025 00:51:05 +0000 (0:00:00.504) 0:00:20.136 **** 2025-09-18 00:53:59.654415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654428 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.654445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654465 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.654484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654496 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.654507 | orchestrator | 2025-09-18 00:53:59.654517 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-18 00:53:59.654528 | orchestrator | Thursday 18 September 2025 00:51:08 +0000 (0:00:03.050) 0:00:23.186 **** 2025-09-18 00:53:59.654544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654564 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.654581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654593 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.654609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654634 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.654645 | orchestrator | 2025-09-18 00:53:59.654655 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-18 00:53:59.654666 | orchestrator | Thursday 18 September 2025 00:51:10 +0000 (0:00:02.111) 0:00:25.297 **** 2025-09-18 00:53:59.654678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654690 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.654714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654734 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.654746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 00:53:59.654757 | 
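The `healthcheck` block repeated in the item dictionaries above is what kolla hands to the container engine for the mariadb container. Written out as a plain Docker Compose healthcheck, purely for readability (no such compose file exists in the deployment, and the seconds units are an assumption about how the bare numbers are interpreted), it reads:

```yaml
services:
  mariadb:
    image: registry.osism.tech/kolla/mariadb-server:2024.2
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/clustercheck"]  # Galera-aware check script shipped in the image
      interval: 30s       # probe every 30 seconds
      timeout: 30s        # each probe may run for up to 30 seconds
      retries: 3          # three consecutive failures mark the container unhealthy
      start_period: 5s    # grace period after container start
```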
orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.654768 | orchestrator | 2025-09-18 00:53:59.654779 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-18 00:53:59.654790 | orchestrator | Thursday 18 September 2025 00:51:13 +0000 (0:00:02.507) 0:00:27.805 **** 2025-09-18 00:53:59.654807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.654819 | orchestrator | 2025-09-18 00:53:59 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:53:59.654836 | orchestrator | 2025-09-18 00:53:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:53:59.654857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.654883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 00:53:59.654902 | orchestrator | 2025-09-18 00:53:59.654913 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-18 00:53:59.654924 | orchestrator | Thursday 18 September 2025 00:51:16 +0000 (0:00:03.637) 0:00:31.442 **** 2025-09-18 00:53:59.654935 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.654946 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:59.654956 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:59.654967 | orchestrator | 2025-09-18 00:53:59.654977 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-18 00:53:59.654988 | orchestrator | Thursday 18 September 2025 00:51:18 +0000 (0:00:01.074) 0:00:32.517 **** 2025-09-18 00:53:59.654999 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655009 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.655020 | orchestrator | ok: [testbed-node-2] 2025-09-18 
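"Create MariaDB volume" and "Divide hosts by their MariaDB volume availability" above decide which hosts still need to be bootstrapped: a host whose `mariadb` volume already existed is treated as carrying data. A loose approximation of the grouping idea with the community.docker collection (kolla-ansible uses its own container tooling, so take this only as an illustration):

```yaml
- name: Create MariaDB volume
  community.docker.docker_volume:
    name: mariadb
  register: mariadb_volume

- name: Divide hosts by their MariaDB volume availability
  ansible.builtin.group_by:
    # "not changed" == the volume was already there before this run
    key: "mariadb_had_volume_{{ mariadb_volume is not changed }}"
```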
00:53:59.655031 | orchestrator | 2025-09-18 00:53:59.655042 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-18 00:53:59.655053 | orchestrator | Thursday 18 September 2025 00:51:19 +0000 (0:00:00.991) 0:00:33.508 **** 2025-09-18 00:53:59.655063 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655074 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.655085 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.655095 | orchestrator | 2025-09-18 00:53:59.655106 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-18 00:53:59.655117 | orchestrator | Thursday 18 September 2025 00:51:19 +0000 (0:00:00.676) 0:00:34.185 **** 2025-09-18 00:53:59.655128 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-18 00:53:59.655140 | orchestrator | ...ignoring 2025-09-18 00:53:59.655151 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-18 00:53:59.655162 | orchestrator | ...ignoring 2025-09-18 00:53:59.655172 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-18 00:53:59.655183 | orchestrator | ...ignoring 2025-09-18 00:53:59.655194 | orchestrator | 2025-09-18 00:53:59.655205 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-18 00:53:59.655215 | orchestrator | Thursday 18 September 2025 00:51:30 +0000 (0:00:10.900) 0:00:45.085 **** 2025-09-18 00:53:59.655226 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655237 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.655247 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.655258 | orchestrator | 2025-09-18 00:53:59.655269 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-18 00:53:59.655279 | orchestrator | Thursday 18 September 2025 00:51:31 +0000 (0:00:00.453) 0:00:45.539 **** 2025-09-18 00:53:59.655304 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.655315 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.655333 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.655343 | orchestrator | 2025-09-18 00:53:59.655354 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-18 00:53:59.655365 | orchestrator | Thursday 18 September 2025 00:51:31 +0000 (0:00:00.631) 0:00:46.170 **** 2025-09-18 00:53:59.655376 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.655387 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.655398 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.655409 | orchestrator | 2025-09-18 00:53:59.655419 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-18 00:53:59.655430 | orchestrator | Thursday 18 September 2025 00:51:32 +0000 (0:00:00.431) 0:00:46.602 **** 2025-09-18 00:53:59.655441 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.655452 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.655462 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.655473 | orchestrator | 2025-09-18 00:53:59.655484 | 
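The WSREP-related tasks around here (skipped on this first deployment) look at Galera's `wsrep_local_state_comment`, which reports `Synced` once a node has caught up with the cluster; the later "Wait for ... to sync WSREP" handlers wait for exactly that. A minimal way to express such a wait, assuming the root credential is available as `database_password` (the kolla role performs the equivalent check through its own wrappers):

```yaml
- name: Wait for MariaDB service to sync WSREP
  ansible.builtin.command: >
    docker exec mariadb
    mysql -u root -p{{ database_password }} --silent --skip-column-names
    -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_state
  until: "'Synced' in wsrep_state.stdout"
  retries: 30
  delay: 10
  changed_when: false
  no_log: true     # the command line contains the database password
```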
orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-18 00:53:59.655495 | orchestrator | Thursday 18 September 2025 00:51:32 +0000 (0:00:00.461) 0:00:47.064 **** 2025-09-18 00:53:59.655506 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655516 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.655533 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.655544 | orchestrator | 2025-09-18 00:53:59.655555 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-18 00:53:59.655566 | orchestrator | Thursday 18 September 2025 00:51:33 +0000 (0:00:00.472) 0:00:47.536 **** 2025-09-18 00:53:59.655576 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.655587 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.655598 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.655609 | orchestrator | 2025-09-18 00:53:59.655619 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 00:53:59.655630 | orchestrator | Thursday 18 September 2025 00:51:33 +0000 (0:00:00.697) 0:00:48.234 **** 2025-09-18 00:53:59.655641 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.655652 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.655663 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-18 00:53:59.655673 | orchestrator | 2025-09-18 00:53:59.655684 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-18 00:53:59.655695 | orchestrator | Thursday 18 September 2025 00:51:34 +0000 (0:00:00.435) 0:00:48.669 **** 2025-09-18 00:53:59.655705 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.655716 | orchestrator | 2025-09-18 00:53:59.655727 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-18 00:53:59.655742 | orchestrator | Thursday 18 September 2025 00:51:44 +0000 (0:00:10.174) 0:00:58.844 **** 2025-09-18 00:53:59.655754 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655764 | orchestrator | 2025-09-18 00:53:59.655775 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 00:53:59.655786 | orchestrator | Thursday 18 September 2025 00:51:44 +0000 (0:00:00.144) 0:00:58.988 **** 2025-09-18 00:53:59.655797 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.655807 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.655818 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.655829 | orchestrator | 2025-09-18 00:53:59.655839 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-18 00:53:59.655850 | orchestrator | Thursday 18 September 2025 00:51:45 +0000 (0:00:01.041) 0:01:00.030 **** 2025-09-18 00:53:59.655861 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.655871 | orchestrator | 2025-09-18 00:53:59.655882 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-18 00:53:59.655893 | orchestrator | Thursday 18 September 2025 00:51:53 +0000 (0:00:07.841) 0:01:07.871 **** 2025-09-18 00:53:59.655904 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655914 | orchestrator | 2025-09-18 00:53:59.655931 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 
2025-09-18 00:53:59.655942 | orchestrator | Thursday 18 September 2025 00:51:54 +0000 (0:00:01.585) 0:01:09.456 **** 2025-09-18 00:53:59.655953 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.655964 | orchestrator | 2025-09-18 00:53:59.655975 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-18 00:53:59.655986 | orchestrator | Thursday 18 September 2025 00:51:57 +0000 (0:00:02.454) 0:01:11.911 **** 2025-09-18 00:53:59.655996 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.656007 | orchestrator | 2025-09-18 00:53:59.656018 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-18 00:53:59.656029 | orchestrator | Thursday 18 September 2025 00:51:57 +0000 (0:00:00.130) 0:01:12.042 **** 2025-09-18 00:53:59.656040 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.656050 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.656061 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.656072 | orchestrator | 2025-09-18 00:53:59.656083 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-18 00:53:59.656094 | orchestrator | Thursday 18 September 2025 00:51:57 +0000 (0:00:00.319) 0:01:12.361 **** 2025-09-18 00:53:59.656105 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.656116 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-18 00:53:59.656126 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:59.656137 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:59.656148 | orchestrator | 2025-09-18 00:53:59.656158 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-18 00:53:59.656169 | orchestrator | skipping: no hosts matched 2025-09-18 00:53:59.656180 | orchestrator | 2025-09-18 00:53:59.656191 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-18 00:53:59.656201 | orchestrator | 2025-09-18 00:53:59.656212 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-18 00:53:59.656223 | orchestrator | Thursday 18 September 2025 00:51:58 +0000 (0:00:00.556) 0:01:12.918 **** 2025-09-18 00:53:59.656234 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:53:59.656244 | orchestrator | 2025-09-18 00:53:59.656255 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-18 00:53:59.656266 | orchestrator | Thursday 18 September 2025 00:52:22 +0000 (0:00:24.437) 0:01:37.356 **** 2025-09-18 00:53:59.656277 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.656287 | orchestrator | 2025-09-18 00:53:59.656358 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-18 00:53:59.656370 | orchestrator | Thursday 18 September 2025 00:52:38 +0000 (0:00:15.681) 0:01:53.037 **** 2025-09-18 00:53:59.656381 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.656392 | orchestrator | 2025-09-18 00:53:59.656402 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-18 00:53:59.656413 | orchestrator | 2025-09-18 00:53:59.656424 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-18 00:53:59.656435 | orchestrator | Thursday 18 September 2025 00:52:41 +0000 
(0:00:02.450) 0:01:55.488 **** 2025-09-18 00:53:59.656446 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:53:59.656456 | orchestrator | 2025-09-18 00:53:59.656467 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-18 00:53:59.656478 | orchestrator | Thursday 18 September 2025 00:53:00 +0000 (0:00:19.431) 0:02:14.919 **** 2025-09-18 00:53:59.656489 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.656500 | orchestrator | 2025-09-18 00:53:59.656517 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-18 00:53:59.656528 | orchestrator | Thursday 18 September 2025 00:53:20 +0000 (0:00:20.544) 0:02:35.464 **** 2025-09-18 00:53:59.656539 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.656550 | orchestrator | 2025-09-18 00:53:59.656561 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-18 00:53:59.656579 | orchestrator | 2025-09-18 00:53:59.656590 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-18 00:53:59.656601 | orchestrator | Thursday 18 September 2025 00:53:23 +0000 (0:00:02.595) 0:02:38.060 **** 2025-09-18 00:53:59.656612 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.656623 | orchestrator | 2025-09-18 00:53:59.656634 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-18 00:53:59.656645 | orchestrator | Thursday 18 September 2025 00:53:35 +0000 (0:00:12.160) 0:02:50.221 **** 2025-09-18 00:53:59.656655 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.656666 | orchestrator | 2025-09-18 00:53:59.656677 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-18 00:53:59.656688 | orchestrator | Thursday 18 September 2025 00:53:40 +0000 (0:00:04.627) 0:02:54.848 **** 2025-09-18 00:53:59.656698 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.656709 | orchestrator | 2025-09-18 00:53:59.656720 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-18 00:53:59.656731 | orchestrator | 2025-09-18 00:53:59.656742 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-18 00:53:59.656758 | orchestrator | Thursday 18 September 2025 00:53:43 +0000 (0:00:02.768) 0:02:57.617 **** 2025-09-18 00:53:59.656769 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:53:59.656780 | orchestrator | 2025-09-18 00:53:59.656791 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-18 00:53:59.656802 | orchestrator | Thursday 18 September 2025 00:53:43 +0000 (0:00:00.553) 0:02:58.171 **** 2025-09-18 00:53:59.656813 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.656824 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.656834 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.656845 | orchestrator | 2025-09-18 00:53:59.656856 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-18 00:53:59.656866 | orchestrator | Thursday 18 September 2025 00:53:46 +0000 (0:00:02.448) 0:03:00.619 **** 2025-09-18 00:53:59.656876 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.656885 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.656895 | 
orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.656904 | orchestrator | 2025-09-18 00:53:59.656914 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-18 00:53:59.656924 | orchestrator | Thursday 18 September 2025 00:53:48 +0000 (0:00:02.541) 0:03:03.161 **** 2025-09-18 00:53:59.656933 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.656943 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.656952 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.656962 | orchestrator | 2025-09-18 00:53:59.656972 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-18 00:53:59.656981 | orchestrator | Thursday 18 September 2025 00:53:50 +0000 (0:00:02.226) 0:03:05.387 **** 2025-09-18 00:53:59.656991 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.657001 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.657010 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:53:59.657020 | orchestrator | 2025-09-18 00:53:59.657029 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-18 00:53:59.657039 | orchestrator | Thursday 18 September 2025 00:53:53 +0000 (0:00:02.157) 0:03:07.544 **** 2025-09-18 00:53:59.657049 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:53:59.657058 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:53:59.657068 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:53:59.657077 | orchestrator | 2025-09-18 00:53:59.657087 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-18 00:53:59.657096 | orchestrator | Thursday 18 September 2025 00:53:55 +0000 (0:00:02.906) 0:03:10.451 **** 2025-09-18 00:53:59.657106 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:53:59.657115 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:53:59.657125 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:53:59.657143 | orchestrator | 2025-09-18 00:53:59.657153 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:53:59.657163 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-18 00:53:59.657173 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-18 00:53:59.657184 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-18 00:53:59.657194 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-18 00:53:59.657204 | orchestrator | 2025-09-18 00:53:59.657213 | orchestrator | 2025-09-18 00:53:59.657223 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:53:59.657233 | orchestrator | Thursday 18 September 2025 00:53:56 +0000 (0:00:00.423) 0:03:10.875 **** 2025-09-18 00:53:59.657242 | orchestrator | =============================================================================== 2025-09-18 00:53:59.657252 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.87s 2025-09-18 00:53:59.657261 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.23s 2025-09-18 00:53:59.657271 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.16s 
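After the task recap that continues below, the deployment driver polls its queued tasks and prints one status line per task until every task has left the STARTED state, as the INFO lines further below show. A minimal sketch of such a wait loop, where get_state is a hypothetical task-state lookup and not the real osism client API:

import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    # Poll every task until it is no longer STARTED, mimicking the
    # "Task ... is in state ..." / "Wait 1 second(s) ..." lines in this log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical state lookup
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

For example, wait_for_tasks(["a2eb3381-9914-456d-84e5-7767cef1ceda"], get_state=lambda _id: "SUCCESS") returns immediately, since the task is no longer STARTED on the first check.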
2025-09-18 00:53:59.657280 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-09-18 00:53:59.657312 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.17s 2025-09-18 00:53:59.657323 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.84s 2025-09-18 00:53:59.657332 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.05s 2025-09-18 00:53:59.657342 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.63s 2025-09-18 00:53:59.657351 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.38s 2025-09-18 00:53:59.657361 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.64s 2025-09-18 00:53:59.657370 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.57s 2025-09-18 00:53:59.657380 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.09s 2025-09-18 00:53:59.657389 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.05s 2025-09-18 00:53:59.657399 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.91s 2025-09-18 00:53:59.657408 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2025-09-18 00:53:59.657417 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.77s 2025-09-18 00:53:59.657431 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.54s 2025-09-18 00:53:59.657441 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.51s 2025-09-18 00:53:59.657450 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.45s 2025-09-18 00:53:59.657460 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.45s 2025-09-18 00:54:02.697249 | orchestrator | 2025-09-18 00:54:02 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:54:02.698953 | orchestrator | 2025-09-18 00:54:02 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:54:02.701120 | orchestrator | 2025-09-18 00:54:02 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:54:02.701160 | orchestrator | 2025-09-18 00:54:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:54:05.737938 | orchestrator | 2025-09-18 00:54:05 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:54:05.739523 | orchestrator | 2025-09-18 00:54:05 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:54:05.741063 | orchestrator | 2025-09-18 00:54:05 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state STARTED 2025-09-18 00:54:05.741100 | orchestrator | 2025-09-18 00:54:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:54:08.779219 | orchestrator | 2025-09-18 00:54:08 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:54:08.779758 | orchestrator | 2025-09-18 00:54:08 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:54:08.783148 | orchestrator | 2025-09-18 00:54:08 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state 
STARTED 2025-09-18 00:54:08.783177 | orchestrator | 2025-09-18 00:54:08 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeat every ~3 seconds from 00:54:11 through 00:55:06; tasks a2eb3381-9914-456d-84e5-7767cef1ceda, 90473857-8efb-46ad-8c29-225166c96a0b and 1c9f14c5-9bb6-4c14-8125-a0baab805784 all remain in state STARTED]
2025-09-18 00:55:09.788059 | orchestrator | 2025-09-18 00:55:09 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:09.788694 | orchestrator | 2025-09-18 00:55:09 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:09.790888 | orchestrator | 2025-09-18 00:55:09 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:09.794097 | orchestrator | 2025-09-18 00:55:09 | INFO  | Task 1c9f14c5-9bb6-4c14-8125-a0baab805784 is in state SUCCESS 2025-09-18 00:55:09.795518 | orchestrator | 2025-09-18 00:55:09.795977 | orchestrator | 2025-09-18
00:55:09.795995 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-18 00:55:09.796007 | orchestrator | 2025-09-18 00:55:09.796018 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-18 00:55:09.796231 | orchestrator | Thursday 18 September 2025 00:52:56 +0000 (0:00:00.437) 0:00:00.437 **** 2025-09-18 00:55:09.796248 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:55:09.796261 | orchestrator | 2025-09-18 00:55:09.796273 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-18 00:55:09.796347 | orchestrator | Thursday 18 September 2025 00:52:57 +0000 (0:00:00.535) 0:00:00.972 **** 2025-09-18 00:55:09.796360 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796372 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796383 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796394 | orchestrator | 2025-09-18 00:55:09.796406 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-18 00:55:09.796418 | orchestrator | Thursday 18 September 2025 00:52:57 +0000 (0:00:00.583) 0:00:01.556 **** 2025-09-18 00:55:09.796429 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796441 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796452 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796463 | orchestrator | 2025-09-18 00:55:09.796475 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-18 00:55:09.796486 | orchestrator | Thursday 18 September 2025 00:52:57 +0000 (0:00:00.264) 0:00:01.820 **** 2025-09-18 00:55:09.796498 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796509 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796521 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796533 | orchestrator | 2025-09-18 00:55:09.796544 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-18 00:55:09.796556 | orchestrator | Thursday 18 September 2025 00:52:58 +0000 (0:00:00.772) 0:00:02.593 **** 2025-09-18 00:55:09.796566 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796576 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796586 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796596 | orchestrator | 2025-09-18 00:55:09.796607 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-18 00:55:09.796630 | orchestrator | Thursday 18 September 2025 00:52:59 +0000 (0:00:00.279) 0:00:02.873 **** 2025-09-18 00:55:09.796641 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796651 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796661 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796671 | orchestrator | 2025-09-18 00:55:09.796681 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-18 00:55:09.796691 | orchestrator | Thursday 18 September 2025 00:52:59 +0000 (0:00:00.269) 0:00:03.143 **** 2025-09-18 00:55:09.796702 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796712 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796722 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796732 | orchestrator | 2025-09-18 00:55:09.796742 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] *** 2025-09-18 00:55:09.796753 | orchestrator | Thursday 18 September 2025 00:52:59 +0000 (0:00:00.288) 0:00:03.431 **** 2025-09-18 00:55:09.796763 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.796774 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.796784 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.796795 | orchestrator | 2025-09-18 00:55:09.796805 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-18 00:55:09.796831 | orchestrator | Thursday 18 September 2025 00:52:59 +0000 (0:00:00.371) 0:00:03.803 **** 2025-09-18 00:55:09.796842 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796852 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.796865 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.796877 | orchestrator | 2025-09-18 00:55:09.796888 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-18 00:55:09.796900 | orchestrator | Thursday 18 September 2025 00:53:00 +0000 (0:00:00.257) 0:00:04.061 **** 2025-09-18 00:55:09.796912 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:55:09.796924 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:55:09.796936 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:55:09.796947 | orchestrator | 2025-09-18 00:55:09.796959 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-18 00:55:09.796970 | orchestrator | Thursday 18 September 2025 00:53:00 +0000 (0:00:00.609) 0:00:04.670 **** 2025-09-18 00:55:09.796982 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.796994 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.797005 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.797016 | orchestrator | 2025-09-18 00:55:09.797028 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-18 00:55:09.797040 | orchestrator | Thursday 18 September 2025 00:53:01 +0000 (0:00:00.394) 0:00:05.065 **** 2025-09-18 00:55:09.797051 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:55:09.797064 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:55:09.797075 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:55:09.797087 | orchestrator | 2025-09-18 00:55:09.797099 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-18 00:55:09.797110 | orchestrator | Thursday 18 September 2025 00:53:03 +0000 (0:00:02.071) 0:00:07.136 **** 2025-09-18 00:55:09.797123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 00:55:09.797135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 00:55:09.797147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 00:55:09.797159 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797171 | orchestrator | 2025-09-18 00:55:09.797183 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-18 00:55:09.797234 | orchestrator | Thursday 18 September 2025 
00:53:03 +0000 (0:00:00.400) 0:00:07.536 **** 2025-09-18 00:55:09.797249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.797262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.797272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.797307 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797318 | orchestrator | 2025-09-18 00:55:09.797327 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-18 00:55:09.797337 | orchestrator | Thursday 18 September 2025 00:53:04 +0000 (0:00:00.840) 0:00:08.377 **** 2025-09-18 00:55:09.797348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.797372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.797383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.797393 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797402 | orchestrator | 2025-09-18 00:55:09.797412 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-18 00:55:09.797422 | orchestrator | Thursday 18 September 2025 00:53:04 +0000 (0:00:00.155) 0:00:08.533 **** 2025-09-18 00:55:09.797433 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fbe2697e1fb6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-18 00:53:01.853498', 'end': '2025-09-18 00:53:01.899501', 'delta': '0:00:00.046003', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': 
None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fbe2697e1fb6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-18 00:55:09.797445 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ac65095eaa60', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-18 00:53:02.609812', 'end': '2025-09-18 00:53:02.642827', 'delta': '0:00:00.033015', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ac65095eaa60'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-18 00:55:09.797485 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7334f9e4d9b7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-18 00:53:03.138362', 'end': '2025-09-18 00:53:03.170891', 'delta': '0:00:00.032529', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7334f9e4d9b7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-18 00:55:09.797497 | orchestrator | 2025-09-18 00:55:09.797507 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-18 00:55:09.797516 | orchestrator | Thursday 18 September 2025 00:53:05 +0000 (0:00:00.385) 0:00:08.919 **** 2025-09-18 00:55:09.797532 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.797542 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.797552 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.797561 | orchestrator | 2025-09-18 00:55:09.797570 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-18 00:55:09.797580 | orchestrator | Thursday 18 September 2025 00:53:05 +0000 (0:00:00.453) 0:00:09.372 **** 2025-09-18 00:55:09.797590 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-18 00:55:09.797599 | orchestrator | 2025-09-18 00:55:09.797609 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-18 00:55:09.797619 | orchestrator | Thursday 18 September 2025 00:53:07 +0000 (0:00:01.707) 0:00:11.080 **** 2025-09-18 00:55:09.797628 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797638 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.797647 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.797657 | orchestrator | 2025-09-18 00:55:09.797666 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-18 00:55:09.797680 | orchestrator | Thursday 18 September 2025 00:53:07 +0000 (0:00:00.297) 0:00:11.378 **** 2025-09-18 00:55:09.797690 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797700 | orchestrator | skipping: 
[testbed-node-4] 2025-09-18 00:55:09.797709 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.797719 | orchestrator | 2025-09-18 00:55:09.797728 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-18 00:55:09.797738 | orchestrator | Thursday 18 September 2025 00:53:07 +0000 (0:00:00.423) 0:00:11.801 **** 2025-09-18 00:55:09.797748 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797757 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.797767 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.797776 | orchestrator | 2025-09-18 00:55:09.797786 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-18 00:55:09.797796 | orchestrator | Thursday 18 September 2025 00:53:08 +0000 (0:00:00.478) 0:00:12.279 **** 2025-09-18 00:55:09.797805 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.797815 | orchestrator | 2025-09-18 00:55:09.797825 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-18 00:55:09.797834 | orchestrator | Thursday 18 September 2025 00:53:08 +0000 (0:00:00.140) 0:00:12.420 **** 2025-09-18 00:55:09.797844 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797853 | orchestrator | 2025-09-18 00:55:09.797863 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-18 00:55:09.797872 | orchestrator | Thursday 18 September 2025 00:53:08 +0000 (0:00:00.232) 0:00:12.652 **** 2025-09-18 00:55:09.797882 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797892 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.797901 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.797911 | orchestrator | 2025-09-18 00:55:09.797920 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-18 00:55:09.797930 | orchestrator | Thursday 18 September 2025 00:53:09 +0000 (0:00:00.306) 0:00:12.958 **** 2025-09-18 00:55:09.797939 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.797949 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.797958 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.797968 | orchestrator | 2025-09-18 00:55:09.797977 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-18 00:55:09.797987 | orchestrator | Thursday 18 September 2025 00:53:09 +0000 (0:00:00.318) 0:00:13.277 **** 2025-09-18 00:55:09.797997 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.798006 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.798051 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.798064 | orchestrator | 2025-09-18 00:55:09.798074 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-18 00:55:09.798083 | orchestrator | Thursday 18 September 2025 00:53:09 +0000 (0:00:00.515) 0:00:13.792 **** 2025-09-18 00:55:09.798099 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.798108 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.798118 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.798128 | orchestrator | 2025-09-18 00:55:09.798137 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-18 00:55:09.798147 | orchestrator | Thursday 18 September 2025 00:53:10 +0000 
(0:00:00.321) 0:00:14.114 **** 2025-09-18 00:55:09.798156 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.798166 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.798175 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.798185 | orchestrator | 2025-09-18 00:55:09.798194 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-18 00:55:09.798204 | orchestrator | Thursday 18 September 2025 00:53:10 +0000 (0:00:00.317) 0:00:14.431 **** 2025-09-18 00:55:09.798213 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.798223 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.798233 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.798242 | orchestrator | 2025-09-18 00:55:09.798252 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-18 00:55:09.798307 | orchestrator | Thursday 18 September 2025 00:53:10 +0000 (0:00:00.323) 0:00:14.754 **** 2025-09-18 00:55:09.798320 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.798329 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.798339 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.798349 | orchestrator | 2025-09-18 00:55:09.798358 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-18 00:55:09.798368 | orchestrator | Thursday 18 September 2025 00:53:11 +0000 (0:00:00.517) 0:00:15.272 **** 2025-09-18 00:55:09.798379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655', 'dm-uuid-LVM-gJumQCyZ1bfxhO0dfjPwEejz9ohnhr3d478wME9KHSsMPzezVqwZlBzRBck7giHw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d', 'dm-uuid-LVM-C4siaejQmTKzx2KcnmVAte27Kk5gro23PrOOSEKuHroY5CBUeLj0Jw30TjqZQgJ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b', 'dm-uuid-LVM-LU2WChruXwDGJXhDT4p35rNV8sSdVPmlIbCWPRkS3bJzeJa8OYo8vVFzIQsRVwrj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-09-18 00:55:09.798537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4', 'dm-uuid-LVM-ej82I6MoUZWchGQS1y2ZyHJCdg8n8p3EWz8LoAAbpQlv51jBj80VxSmVjRuEteR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-68Bm8z-3zKE-kRH3-9AQX-alhg-bxaz-2X4H8K', 'scsi-0QEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca', 'scsi-SQEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-01LZMs-Pdh3-IDPz-xI2P-Fjst-4xgK-QgzMxM', 'scsi-0QEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738', 'scsi-SQEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97', 'scsi-SQEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798705 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.798716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qdztzf-yJQx-6QsS-ue8y-VY8R-Ex68-Rs4ML0', 'scsi-0QEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8', 'scsi-SQEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lOA0Q4-7mCN-oUtO-k87H-t1uw-28O9-bDn4PP', 'scsi-0QEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31', 'scsi-SQEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d', 'scsi-SQEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.798850 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.798860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5', 'dm-uuid-LVM-TH2vhzQ3frcs9a69TU5wE7rT1r26iytTTaI0d0Ks3AhpggiVBlIHs2kJM5ib59Hu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0', 'dm-uuid-LVM-qycmhnh5qlb9tVSHUxG1t8mss4Ah6MDvAYLOSvYJYOvvz5TVq9e3dFYRGrXLqJpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 00:55:09.798984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.799005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TE10AL-7Csv-4u2G-ozSb-13Za-yvZs-KCadDL', 'scsi-0QEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739', 'scsi-SQEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.799016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jfe1uI-hXch-v6I9-89UP-ov5N-PxM2-Ar1e3o', 'scsi-0QEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd', 'scsi-SQEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.799027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c', 'scsi-SQEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.799042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 00:55:09.799052 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.799062 | orchestrator | 2025-09-18 00:55:09.799072 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-18 00:55:09.799081 | orchestrator | Thursday 18 September 2025 00:53:11 +0000 (0:00:00.550) 0:00:15.822 **** 2025-09-18 00:55:09.799092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655', 'dm-uuid-LVM-gJumQCyZ1bfxhO0dfjPwEejz9ohnhr3d478wME9KHSsMPzezVqwZlBzRBck7giHw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d', 'dm-uuid-LVM-C4siaejQmTKzx2KcnmVAte27Kk5gro23PrOOSEKuHroY5CBUeLj0Jw30TjqZQgJ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799218 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b', 'dm-uuid-LVM-LU2WChruXwDGJXhDT4p35rNV8sSdVPmlIbCWPRkS3bJzeJa8OYo8vVFzIQsRVwrj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799240 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ff69fa-cd5e-473d-a298-9bc83966394f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799257 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4', 'dm-uuid-LVM-ej82I6MoUZWchGQS1y2ZyHJCdg8n8p3EWz8LoAAbpQlv51jBj80VxSmVjRuEteR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0cde6920--619d--54be--8750--7c50463ca655-osd--block--0cde6920--619d--54be--8750--7c50463ca655'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-68Bm8z-3zKE-kRH3-9AQX-alhg-bxaz-2X4H8K', 'scsi-0QEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca', 'scsi-SQEMU_QEMU_HARDDISK_2e4d0087-2785-46b4-8f27-c306f0a9f7ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3ac78a0a--4049--5f74--bf32--d6052d628b7d-osd--block--3ac78a0a--4049--5f74--bf32--d6052d628b7d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-01LZMs-Pdh3-IDPz-xI2P-Fjst-4xgK-QgzMxM', 'scsi-0QEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738', 'scsi-SQEMU_QEMU_HARDDISK_b8995dd7-5ece-41d0-bfc6-34744a0d6738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799341 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97', 'scsi-SQEMU_QEMU_HARDDISK_5ef9a296-1867-4994-9a8d-f57ea224fa97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799361 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799413 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.799423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799464 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d275dfba-7189-46c6-ae83-21710451b98e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b959ef4--2353--55d9--9e37--ea43ed82416b-osd--block--7b959ef4--2353--55d9--9e37--ea43ed82416b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qdztzf-yJQx-6QsS-ue8y-VY8R-Ex68-Rs4ML0', 'scsi-0QEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8', 'scsi-SQEMU_QEMU_HARDDISK_514241af-fcf0-4d5f-9d7d-ad7f828482f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799497 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--652709a4--002d--5e7f--9b0a--9f9e264992f4-osd--block--652709a4--002d--5e7f--9b0a--9f9e264992f4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lOA0Q4-7mCN-oUtO-k87H-t1uw-28O9-bDn4PP', 'scsi-0QEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31', 'scsi-SQEMU_QEMU_HARDDISK_2ccda686-b8eb-476a-b4c1-b925092fcf31'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799507 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5', 'dm-uuid-LVM-TH2vhzQ3frcs9a69TU5wE7rT1r26iytTTaI0d0Ks3AhpggiVBlIHs2kJM5ib59Hu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d', 'scsi-SQEMU_QEMU_HARDDISK_6ac3f343-eabf-4363-a559-72345c6aba0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0', 'dm-uuid-LVM-qycmhnh5qlb9tVSHUxG1t8mss4Ah6MDvAYLOSvYJYOvvz5TVq9e3dFYRGrXLqJpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-09-18-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799573 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.799583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799648 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799658 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d40cfdb9-09fe-4d78-8a8b-049e8e079a3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--07829316--95ed--5d0c--8777--c74850e385f5-osd--block--07829316--95ed--5d0c--8777--c74850e385f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TE10AL-7Csv-4u2G-ozSb-13Za-yvZs-KCadDL', 'scsi-0QEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739', 'scsi-SQEMU_QEMU_HARDDISK_79b9ef2b-0416-40aa-a8ac-6a91762c3739'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799704 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--48f1b2b0--1ebe--571e--b515--4e988bd235b0-osd--block--48f1b2b0--1ebe--571e--b515--4e988bd235b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jfe1uI-hXch-v6I9-89UP-ov5N-PxM2-Ar1e3o', 'scsi-0QEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd', 'scsi-SQEMU_QEMU_HARDDISK_df5980fc-abe4-45b8-a678-4af06952c2bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799714 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c', 'scsi-SQEMU_QEMU_HARDDISK_a477f58a-3bf1-4975-968c-c72809c2667c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 00:55:09.799745 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.799755 | orchestrator | 2025-09-18 00:55:09.799765 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-18 00:55:09.799775 | orchestrator | Thursday 18 September 2025 00:53:12 +0000 (0:00:00.591) 0:00:16.414 **** 2025-09-18 00:55:09.799785 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.799795 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.799804 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.799814 | orchestrator | 2025-09-18 00:55:09.799823 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-18 00:55:09.799833 | orchestrator | Thursday 18 September 2025 00:53:13 +0000 (0:00:00.723) 0:00:17.137 **** 2025-09-18 00:55:09.799843 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.799852 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.799862 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.799871 | orchestrator | 2025-09-18 00:55:09.799881 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 00:55:09.799891 | orchestrator | Thursday 18 September 2025 00:53:13 +0000 (0:00:00.470) 0:00:17.608 **** 2025-09-18 00:55:09.799900 | 
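For context on the wall of skipped items above: the ceph-facts task "Set_fact devices generate device list when osd_auto_discovery" loops over every block device reported in ansible_facts.devices and skips each item here because the logged condition, osd_auto_discovery | default(False) | bool, evaluates to false in this testbed, so the OSD device list presumably stays with the statically configured devices instead of being auto-discovered. The snippet below is only a rough Python sketch of that kind of device filtering; the helper name candidate_osd_devices and the exact exclusion rules are assumptions, not the role's actual logic.

```python
# Illustrative sketch only: roughly how an OSD candidate list could be derived
# from the ansible_facts.devices structure echoed in the skipped items above.
# Not the ceph-facts role's actual code; names and exclusion rules are assumptions.

def candidate_osd_devices(devices):
    candidates = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-", "sr")):  # virtual, device-mapper and optical devices
            continue
        if info.get("removable") == "1":            # removable media (e.g. the config-2 DVD)
            continue
        if info.get("partitions"):                  # root disk: already partitioned
            continue
        if info.get("holders"):                     # already claimed, e.g. by a ceph LVM volume
            continue
        if info.get("sectors", 0) == 0:             # zero-sized devices
            continue
        candidates.append(f"/dev/{name}")
    return sorted(candidates)


# Trimmed-down example modelled on the facts logged for testbed-node-4 above:
example = {
    "loop0": {"removable": "0", "partitions": {}, "holders": [], "sectors": 0},
    "sr0":   {"removable": "1", "partitions": {}, "holders": [], "sectors": 253},
    "sda":   {"removable": "0", "partitions": {"sda1": {}}, "holders": [], "sectors": 167772160},
    "sdb":   {"removable": "0", "partitions": {}, "holders": ["ceph-osd-lv"], "sectors": 41943040},  # placeholder holder name
    "sdd":   {"removable": "0", "partitions": {}, "holders": [], "sectors": 41943040},
}
print(candidate_osd_devices(example))  # ['/dev/sdd']
```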
orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.799910 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.799919 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.799929 | orchestrator | 2025-09-18 00:55:09.799938 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 00:55:09.799948 | orchestrator | Thursday 18 September 2025 00:53:14 +0000 (0:00:00.709) 0:00:18.317 **** 2025-09-18 00:55:09.799958 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.799967 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.799977 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.799987 | orchestrator | 2025-09-18 00:55:09.799996 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 00:55:09.800010 | orchestrator | Thursday 18 September 2025 00:53:14 +0000 (0:00:00.303) 0:00:18.621 **** 2025-09-18 00:55:09.800020 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800029 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.800039 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.800048 | orchestrator | 2025-09-18 00:55:09.800058 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 00:55:09.800068 | orchestrator | Thursday 18 September 2025 00:53:15 +0000 (0:00:00.431) 0:00:19.052 **** 2025-09-18 00:55:09.800077 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800087 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.800097 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.800106 | orchestrator | 2025-09-18 00:55:09.800116 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-18 00:55:09.800125 | orchestrator | Thursday 18 September 2025 00:53:15 +0000 (0:00:00.534) 0:00:19.586 **** 2025-09-18 00:55:09.800135 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-18 00:55:09.800145 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-18 00:55:09.800154 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-18 00:55:09.800164 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-18 00:55:09.800173 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-18 00:55:09.800183 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-18 00:55:09.800192 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-18 00:55:09.800207 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-18 00:55:09.800217 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-18 00:55:09.800226 | orchestrator | 2025-09-18 00:55:09.800236 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-18 00:55:09.800246 | orchestrator | Thursday 18 September 2025 00:53:16 +0000 (0:00:00.831) 0:00:20.418 **** 2025-09-18 00:55:09.800255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 00:55:09.800265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 00:55:09.800275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 00:55:09.800296 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-18 00:55:09.800316 | orchestrator | 
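The "Set_fact _monitor_addresses - ipv4" task above records one address per monitor host (testbed-node-0 through testbed-node-2), while the IPv6 variant is skipped. A per-monitor map like that is typically what ends up as the mon host entry in ceph.conf. The sketch below is a minimal illustration under that assumption; the 192.168.16.x addresses are taken from the delegation hints printed later in this log, not from the fact values themselves, which are not shown.

```python
# Minimal sketch (assumption): folding a per-monitor address map, like the
# _monitor_addresses fact set above, into a "mon host" line for ceph.conf.
monitor_addresses = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

mon_host = ",".join(monitor_addresses[name] for name in sorted(monitor_addresses))
print(f"mon host = {mon_host}")
# mon host = 192.168.16.10,192.168.16.11,192.168.16.12
```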
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-18 00:55:09.800325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-18 00:55:09.800335 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.800344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-18 00:55:09.800354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-18 00:55:09.800363 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-18 00:55:09.800373 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.800382 | orchestrator | 2025-09-18 00:55:09.800392 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-18 00:55:09.800402 | orchestrator | Thursday 18 September 2025 00:53:16 +0000 (0:00:00.361) 0:00:20.779 **** 2025-09-18 00:55:09.800411 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:55:09.800421 | orchestrator | 2025-09-18 00:55:09.800431 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-18 00:55:09.800441 | orchestrator | Thursday 18 September 2025 00:53:17 +0000 (0:00:00.781) 0:00:21.561 **** 2025-09-18 00:55:09.800451 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800460 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.800470 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.800479 | orchestrator | 2025-09-18 00:55:09.800494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-18 00:55:09.800504 | orchestrator | Thursday 18 September 2025 00:53:18 +0000 (0:00:00.340) 0:00:21.902 **** 2025-09-18 00:55:09.800514 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800523 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.800532 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.800542 | orchestrator | 2025-09-18 00:55:09.800552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-18 00:55:09.800561 | orchestrator | Thursday 18 September 2025 00:53:18 +0000 (0:00:00.311) 0:00:22.213 **** 2025-09-18 00:55:09.800571 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800581 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.800590 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:55:09.800600 | orchestrator | 2025-09-18 00:55:09.800609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-18 00:55:09.800619 | orchestrator | Thursday 18 September 2025 00:53:18 +0000 (0:00:00.347) 0:00:22.561 **** 2025-09-18 00:55:09.800629 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.800638 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.800648 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.800658 | orchestrator | 2025-09-18 00:55:09.800667 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-18 00:55:09.800677 | orchestrator | Thursday 18 September 2025 00:53:19 +0000 (0:00:00.615) 0:00:23.176 **** 2025-09-18 00:55:09.800686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:55:09.800696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 
00:55:09.800711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:55:09.800721 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800731 | orchestrator | 2025-09-18 00:55:09.800740 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-18 00:55:09.800750 | orchestrator | Thursday 18 September 2025 00:53:19 +0000 (0:00:00.388) 0:00:23.564 **** 2025-09-18 00:55:09.800759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:55:09.800769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:55:09.800785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:55:09.800795 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800805 | orchestrator | 2025-09-18 00:55:09.800814 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-18 00:55:09.800824 | orchestrator | Thursday 18 September 2025 00:53:20 +0000 (0:00:00.374) 0:00:23.939 **** 2025-09-18 00:55:09.800834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 00:55:09.800843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 00:55:09.800853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 00:55:09.800862 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.800872 | orchestrator | 2025-09-18 00:55:09.800881 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-18 00:55:09.800891 | orchestrator | Thursday 18 September 2025 00:53:20 +0000 (0:00:00.402) 0:00:24.342 **** 2025-09-18 00:55:09.800901 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:55:09.800910 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:55:09.800920 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:55:09.800929 | orchestrator | 2025-09-18 00:55:09.800939 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-18 00:55:09.800949 | orchestrator | Thursday 18 September 2025 00:53:20 +0000 (0:00:00.346) 0:00:24.688 **** 2025-09-18 00:55:09.800958 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-18 00:55:09.800968 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-18 00:55:09.800977 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-18 00:55:09.800987 | orchestrator | 2025-09-18 00:55:09.800996 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-18 00:55:09.801006 | orchestrator | Thursday 18 September 2025 00:53:21 +0000 (0:00:00.511) 0:00:25.200 **** 2025-09-18 00:55:09.801016 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:55:09.801025 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:55:09.801035 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:55:09.801044 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-18 00:55:09.801054 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-18 00:55:09.801063 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-18 00:55:09.801073 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-09-18 00:55:09.801083 | orchestrator | 2025-09-18 00:55:09.801093 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-18 00:55:09.801102 | orchestrator | Thursday 18 September 2025 00:53:22 +0000 (0:00:01.004) 0:00:26.204 **** 2025-09-18 00:55:09.801112 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 00:55:09.801121 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 00:55:09.801131 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 00:55:09.801140 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-18 00:55:09.801150 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-18 00:55:09.801165 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-18 00:55:09.801174 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-18 00:55:09.801184 | orchestrator | 2025-09-18 00:55:09.801198 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-18 00:55:09.801208 | orchestrator | Thursday 18 September 2025 00:53:24 +0000 (0:00:02.020) 0:00:28.225 **** 2025-09-18 00:55:09.801217 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:55:09.801227 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:55:09.801236 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-18 00:55:09.801246 | orchestrator | 2025-09-18 00:55:09.801255 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-18 00:55:09.801265 | orchestrator | Thursday 18 September 2025 00:53:24 +0000 (0:00:00.408) 0:00:28.634 **** 2025-09-18 00:55:09.801275 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:55:09.801322 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:55:09.801333 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:55:09.801343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:55:09.801353 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 00:55:09.801363 | orchestrator | 2025-09-18 00:55:09.801372 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-18 00:55:09.801415 | orchestrator | Thursday 18 September 2025 00:54:11 +0000 (0:00:47.111) 0:01:15.746 **** 2025-09-18 00:55:09.801426 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801445 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801455 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801464 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801480 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-18 00:55:09.801488 | orchestrator | 2025-09-18 00:55:09.801496 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-18 00:55:09.801504 | orchestrator | Thursday 18 September 2025 00:54:36 +0000 (0:00:24.911) 0:01:40.658 **** 2025-09-18 00:55:09.801512 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801525 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801533 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801541 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801548 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801556 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801564 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 00:55:09.801572 | orchestrator | 2025-09-18 00:55:09.801580 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-18 00:55:09.801588 | orchestrator | Thursday 18 September 2025 00:54:49 +0000 (0:00:12.589) 0:01:53.247 **** 2025-09-18 00:55:09.801596 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801604 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:55:09.801611 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:55:09.801619 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801627 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:55:09.801635 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:55:09.801648 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801656 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:55:09.801664 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:55:09.801672 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801679 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:55:09.801687 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:55:09.801695 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801703 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:55:09.801711 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:55:09.801719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 00:55:09.801726 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 00:55:09.801734 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 00:55:09.801742 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-18 00:55:09.801750 | orchestrator | 2025-09-18 00:55:09.801758 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:55:09.801766 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-18 00:55:09.801775 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-18 00:55:09.801786 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-18 00:55:09.801794 | orchestrator | 2025-09-18 00:55:09.801802 | orchestrator | 2025-09-18 00:55:09.801810 | orchestrator | 2025-09-18 00:55:09.801817 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:55:09.801825 | orchestrator | Thursday 18 September 2025 00:55:07 +0000 (0:00:18.361) 0:02:11.609 **** 2025-09-18 00:55:09.801833 | orchestrator | =============================================================================== 2025-09-18 00:55:09.801846 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.11s 2025-09-18 00:55:09.801854 | orchestrator | generate keys ---------------------------------------------------------- 24.91s 2025-09-18 00:55:09.801862 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.36s 2025-09-18 00:55:09.801870 | orchestrator | get keys from monitors ------------------------------------------------- 12.59s 2025-09-18 00:55:09.801878 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.07s 2025-09-18 00:55:09.801886 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.02s 2025-09-18 00:55:09.801894 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s 2025-09-18 00:55:09.801901 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.00s 2025-09-18 00:55:09.801909 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s 2025-09-18 00:55:09.801917 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s 2025-09-18 00:55:09.801925 | orchestrator | 
ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.78s 2025-09-18 00:55:09.801933 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s 2025-09-18 00:55:09.801941 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2025-09-18 00:55:09.801949 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.71s 2025-09-18 00:55:09.801956 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2025-09-18 00:55:09.801964 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s 2025-09-18 00:55:09.801972 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2025-09-18 00:55:09.801980 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.58s 2025-09-18 00:55:09.801988 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2025-09-18 00:55:09.801995 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.54s 2025-09-18 00:55:09.802003 | orchestrator | 2025-09-18 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:12.840385 | orchestrator | 2025-09-18 00:55:12 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:12.841990 | orchestrator | 2025-09-18 00:55:12 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:12.843660 | orchestrator | 2025-09-18 00:55:12 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:12.843686 | orchestrator | 2025-09-18 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:15.883986 | orchestrator | 2025-09-18 00:55:15 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:15.884626 | orchestrator | 2025-09-18 00:55:15 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:15.885073 | orchestrator | 2025-09-18 00:55:15 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:15.885097 | orchestrator | 2025-09-18 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:18.928618 | orchestrator | 2025-09-18 00:55:18 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:18.929802 | orchestrator | 2025-09-18 00:55:18 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:18.932776 | orchestrator | 2025-09-18 00:55:18 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:18.934122 | orchestrator | 2025-09-18 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:21.978965 | orchestrator | 2025-09-18 00:55:21 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:21.981043 | orchestrator | 2025-09-18 00:55:21 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:21.983818 | orchestrator | 2025-09-18 00:55:21 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:21.984446 | orchestrator | 2025-09-18 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:25.034263 | orchestrator | 2025-09-18 00:55:25 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 
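
The "create openstack pool(s)" task dominates the recap above at 47.11s: for each pool definition shown earlier (backups, volumes, images, metrics, vms) it creates a replicated pool with 32 placement groups, size 3 and the rbd application tag, delegated to the first monitor. A minimal sketch of the equivalent work for one pool, using plain ceph CLI calls instead of the module ceph-ansible actually ships, could look like this:

# pool-sketch.yml, illustrative only; values taken from the 'volumes' item in
# the log ({'name': 'volumes', 'pg_num': 32, 'pgp_num': 32, 'size': 3,
# 'rule_name': 'replicated_rule', 'application': 'rbd'}).
- hosts: testbed-node-0
  become: true
  tasks:
    - name: Create the replicated pool with 32 PGs
      ansible.builtin.command: ceph osd pool create volumes 32 32 replicated replicated_rule
      changed_when: true

    - name: Apply the requested replica count
      ansible.builtin.command: ceph osd pool set volumes size 3
      changed_when: true

    - name: Enable the rbd application on the pool
      ansible.builtin.command: ceph osd pool application enable volumes rbd
      changed_when: true
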
00:55:25.035862 | orchestrator | 2025-09-18 00:55:25 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:25.038559 | orchestrator | 2025-09-18 00:55:25 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:25.038589 | orchestrator | 2025-09-18 00:55:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:28.083204 | orchestrator | 2025-09-18 00:55:28 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:28.084465 | orchestrator | 2025-09-18 00:55:28 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:28.087114 | orchestrator | 2025-09-18 00:55:28 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:28.087138 | orchestrator | 2025-09-18 00:55:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:31.123488 | orchestrator | 2025-09-18 00:55:31 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:31.124993 | orchestrator | 2025-09-18 00:55:31 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:31.126439 | orchestrator | 2025-09-18 00:55:31 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:31.126460 | orchestrator | 2025-09-18 00:55:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:34.180938 | orchestrator | 2025-09-18 00:55:34 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:34.183414 | orchestrator | 2025-09-18 00:55:34 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state STARTED 2025-09-18 00:55:34.184686 | orchestrator | 2025-09-18 00:55:34 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:34.184879 | orchestrator | 2025-09-18 00:55:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:37.260944 | orchestrator | 2025-09-18 00:55:37 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:37.261156 | orchestrator | 2025-09-18 00:55:37 | INFO  | Task 94d95f97-c192-4db8-a29d-91703f4c584d is in state SUCCESS 2025-09-18 00:55:37.261188 | orchestrator | 2025-09-18 00:55:37 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:37.261200 | orchestrator | 2025-09-18 00:55:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:40.309822 | orchestrator | 2025-09-18 00:55:40 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:40.313711 | orchestrator | 2025-09-18 00:55:40 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:40.316501 | orchestrator | 2025-09-18 00:55:40 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:40.317427 | orchestrator | 2025-09-18 00:55:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:43.362106 | orchestrator | 2025-09-18 00:55:43 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:43.365080 | orchestrator | 2025-09-18 00:55:43 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state STARTED 2025-09-18 00:55:43.369109 | orchestrator | 2025-09-18 00:55:43 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:43.369157 | orchestrator | 2025-09-18 00:55:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:46.419085 | orchestrator | 2025-09-18 
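
The interleaved INFO lines come from the deployment wrapper on the manager polling the state of the background tasks (a2eb3381..., 94d95f97..., 90473857...) once per second until each one leaves STARTED and reports SUCCESS. The same wait-and-recheck pattern can be expressed in Ansible with until/retries/delay; the check_task_state.sh helper below is hypothetical and only stands in for whatever command queries the task state:

# poll-sketch.yml, pattern illustration only, not the OSISM implementation.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Re-check the task state every second until it is SUCCESS
      ansible.builtin.command: ./check_task_state.sh a2eb3381-9914-456d-84e5-7767cef1ceda
      register: task_state
      until: task_state.stdout == "SUCCESS"
      retries: 600        # give up after roughly ten minutes of checks
      delay: 1            # matches "Wait 1 second(s) until the next check"
      changed_when: false
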
00:55:46 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:46.421800 | orchestrator | 2025-09-18 00:55:46 | INFO  | Task 90473857-8efb-46ad-8c29-225166c96a0b is in state SUCCESS 2025-09-18 00:55:46.423944 | orchestrator | 2025-09-18 00:55:46.423995 | orchestrator | 2025-09-18 00:55:46.424009 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-18 00:55:46.424021 | orchestrator | 2025-09-18 00:55:46.424032 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-18 00:55:46.424347 | orchestrator | Thursday 18 September 2025 00:55:11 +0000 (0:00:00.144) 0:00:00.144 **** 2025-09-18 00:55:46.424367 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-18 00:55:46.424380 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424391 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424402 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-18 00:55:46.424413 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424424 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-18 00:55:46.424452 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-18 00:55:46.424560 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-18 00:55:46.424575 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-18 00:55:46.424585 | orchestrator | 2025-09-18 00:55:46.424596 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-18 00:55:46.424608 | orchestrator | Thursday 18 September 2025 00:55:15 +0000 (0:00:04.208) 0:00:04.353 **** 2025-09-18 00:55:46.424619 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 00:55:46.424631 | orchestrator | 2025-09-18 00:55:46.424642 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-18 00:55:46.424653 | orchestrator | Thursday 18 September 2025 00:55:16 +0000 (0:00:00.933) 0:00:05.287 **** 2025-09-18 00:55:46.424665 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-18 00:55:46.424676 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424687 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424698 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-18 00:55:46.424709 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424720 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-18 00:55:46.424731 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-18 00:55:46.424742 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.gnocchi.keyring) 2025-09-18 00:55:46.424753 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-18 00:55:46.424764 | orchestrator | 2025-09-18 00:55:46.424775 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-18 00:55:46.424811 | orchestrator | Thursday 18 September 2025 00:55:29 +0000 (0:00:12.974) 0:00:18.261 **** 2025-09-18 00:55:46.424823 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-18 00:55:46.424834 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424844 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424855 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-18 00:55:46.424866 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-18 00:55:46.424877 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-18 00:55:46.424887 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-18 00:55:46.424898 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-18 00:55:46.424909 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-18 00:55:46.424919 | orchestrator | 2025-09-18 00:55:46.424930 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:55:46.424941 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:55:46.424953 | orchestrator | 2025-09-18 00:55:46.424964 | orchestrator | 2025-09-18 00:55:46.424975 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:55:46.424986 | orchestrator | Thursday 18 September 2025 00:55:36 +0000 (0:00:06.756) 0:00:25.018 **** 2025-09-18 00:55:46.424996 | orchestrator | =============================================================================== 2025-09-18 00:55:46.425007 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.97s 2025-09-18 00:55:46.425018 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.76s 2025-09-18 00:55:46.425029 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.21s 2025-09-18 00:55:46.425040 | orchestrator | Create share directory -------------------------------------------------- 0.93s 2025-09-18 00:55:46.425051 | orchestrator | 2025-09-18 00:55:46.425062 | orchestrator | 2025-09-18 00:55:46.425073 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:55:46.425084 | orchestrator | 2025-09-18 00:55:46.425109 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:55:46.425121 | orchestrator | Thursday 18 September 2025 00:54:00 +0000 (0:00:00.271) 0:00:00.271 **** 2025-09-18 00:55:46.425133 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.425145 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.425156 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.425168 | orchestrator | 2025-09-18 00:55:46.425180 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 
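
The "Copy ceph keys to the configuration repository" play above pulls the generated client keyrings (admin, cinder, cinder-backup, nova, glance, gnocchi, manila) off the first monitor and writes them both to a share directory and into the configuration repository on the manager, so that the later Kolla plays can hand them to the service containers. A minimal sketch of that fetch-and-store pattern, with destination paths chosen purely for illustration:

# keys-sketch.yml, illustrative paths; the real play follows the testbed's
# configuration repository layout.
- hosts: testbed-manager
  tasks:
    - name: Read a keyring from the first monitor
      ansible.builtin.slurp:
        src: "/etc/ceph/{{ item }}"
      delegate_to: testbed-node-0
      register: keyrings
      loop:
        - ceph.client.admin.keyring
        - ceph.client.cinder.keyring
        - ceph.client.glance.keyring

    - name: Write each keyring into the configuration directory
      ansible.builtin.copy:
        content: "{{ item.content | b64decode }}"
        dest: "/opt/configuration/keyrings/{{ item.item }}"
        mode: "0640"
      loop: "{{ keyrings.results }}"
      loop_control:
        label: "{{ item.item }}"
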
00:55:46.425194 | orchestrator | Thursday 18 September 2025 00:54:01 +0000 (0:00:00.309) 0:00:00.580 **** 2025-09-18 00:55:46.425207 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-18 00:55:46.425221 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-18 00:55:46.425235 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-18 00:55:46.425248 | orchestrator | 2025-09-18 00:55:46.425261 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-18 00:55:46.425297 | orchestrator | 2025-09-18 00:55:46.425309 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-18 00:55:46.425320 | orchestrator | Thursday 18 September 2025 00:54:01 +0000 (0:00:00.434) 0:00:01.015 **** 2025-09-18 00:55:46.425338 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:55:46.425349 | orchestrator | 2025-09-18 00:55:46.425360 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-18 00:55:46.425370 | orchestrator | Thursday 18 September 2025 00:54:02 +0000 (0:00:00.535) 0:00:01.551 **** 2025-09-18 00:55:46.425395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.425430 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.425451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.425464 | orchestrator | 2025-09-18 00:55:46.425475 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-18 00:55:46.425486 | orchestrator | Thursday 18 September 2025 00:54:03 +0000 (0:00:01.074) 0:00:02.626 **** 2025-09-18 00:55:46.425497 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.425507 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.425518 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.425529 | orchestrator | 2025-09-18 00:55:46.425540 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-18 00:55:46.425551 | orchestrator | Thursday 18 September 2025 00:54:03 +0000 (0:00:00.480) 0:00:03.107 **** 2025-09-18 00:55:46.425561 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-18 00:55:46.425572 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-18 00:55:46.425588 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-18 00:55:46.425600 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-18 00:55:46.425611 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-18 00:55:46.425621 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-18 00:55:46.425632 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-18 00:55:46.425643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-18 00:55:46.425660 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-18 00:55:46.425671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-18 00:55:46.425681 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-18 00:55:46.425692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-18 00:55:46.425707 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-18 00:55:46.425718 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-18 00:55:46.425729 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-18 00:55:46.425740 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-18 00:55:46.425751 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-18 
00:55:46.425761 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-18 00:55:46.425772 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-18 00:55:46.425783 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-18 00:55:46.425793 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-18 00:55:46.425804 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-18 00:55:46.425815 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-18 00:55:46.425825 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-18 00:55:46.425837 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-18 00:55:46.425849 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-18 00:55:46.425860 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-18 00:55:46.425871 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-18 00:55:46.425882 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-18 00:55:46.425892 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-18 00:55:46.425903 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-18 00:55:46.425914 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-18 00:55:46.425925 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-18 00:55:46.425936 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-18 00:55:46.425946 | orchestrator | 2025-09-18 00:55:46.425957 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.425968 | orchestrator | Thursday 18 September 2025 00:54:04 +0000 (0:00:00.825) 0:00:03.932 **** 2025-09-18 00:55:46.425979 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.425995 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.426006 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.426065 | orchestrator | 2025-09-18 00:55:46.426079 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.426090 | orchestrator | Thursday 18 September 2025 00:54:04 +0000 (0:00:00.342) 0:00:04.274 
**** 2025-09-18 00:55:46.426101 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426112 | orchestrator | 2025-09-18 00:55:46.426123 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.426140 | orchestrator | Thursday 18 September 2025 00:54:04 +0000 (0:00:00.134) 0:00:04.408 **** 2025-09-18 00:55:46.426151 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426162 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.426173 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.426183 | orchestrator | 2025-09-18 00:55:46.426194 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.426205 | orchestrator | Thursday 18 September 2025 00:54:05 +0000 (0:00:00.508) 0:00:04.917 **** 2025-09-18 00:55:46.426216 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.426227 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.426237 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.426248 | orchestrator | 2025-09-18 00:55:46.426259 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.426270 | orchestrator | Thursday 18 September 2025 00:54:05 +0000 (0:00:00.358) 0:00:05.275 **** 2025-09-18 00:55:46.426313 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426324 | orchestrator | 2025-09-18 00:55:46.426335 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.426346 | orchestrator | Thursday 18 September 2025 00:54:05 +0000 (0:00:00.130) 0:00:05.406 **** 2025-09-18 00:55:46.426357 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426367 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.426378 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.426388 | orchestrator | 2025-09-18 00:55:46.426405 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.426416 | orchestrator | Thursday 18 September 2025 00:54:06 +0000 (0:00:00.305) 0:00:05.711 **** 2025-09-18 00:55:46.426426 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.426437 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.426448 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.426459 | orchestrator | 2025-09-18 00:55:46.426470 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.426481 | orchestrator | Thursday 18 September 2025 00:54:06 +0000 (0:00:00.308) 0:00:06.019 **** 2025-09-18 00:55:46.426491 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426502 | orchestrator | 2025-09-18 00:55:46.426513 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.426523 | orchestrator | Thursday 18 September 2025 00:54:06 +0000 (0:00:00.129) 0:00:06.149 **** 2025-09-18 00:55:46.426534 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426545 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.426555 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.426566 | orchestrator | 2025-09-18 00:55:46.426577 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.426588 | orchestrator | Thursday 18 September 2025 00:54:07 +0000 (0:00:00.558) 0:00:06.708 **** 2025-09-18 
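
The long run of "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" triples comes from the per-service loop in the horizon role: policy_item.yml is included once for every enabled service (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia) and skipped for disabled ones. A condensed sketch of that include pattern, with an assumed item list and file name rather than the kolla-ansible originals:

# horizon-policy-sketch.yml, structure only; the real role's item list and the
# contents of policy_item.yml differ.
- hosts: horizon
  gather_facts: false
  tasks:
    - name: Include per-service policy handling for enabled services only
      ansible.builtin.include_tasks: policy_item.yml
      loop:
        - { name: "heat", enabled: "no" }      # skipped, as in the log
        - { name: "cinder", enabled: "yes" }   # included
        - { name: "nova", enabled: true }      # included
      when: item.enabled | bool
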
00:55:46.426599 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.426609 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.426620 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.426631 | orchestrator | 2025-09-18 00:55:46.426642 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.426652 | orchestrator | Thursday 18 September 2025 00:54:07 +0000 (0:00:00.329) 0:00:07.038 **** 2025-09-18 00:55:46.426663 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426674 | orchestrator | 2025-09-18 00:55:46.426684 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.426703 | orchestrator | Thursday 18 September 2025 00:54:07 +0000 (0:00:00.148) 0:00:07.186 **** 2025-09-18 00:55:46.426714 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426725 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.426735 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.426746 | orchestrator | 2025-09-18 00:55:46.426757 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.426768 | orchestrator | Thursday 18 September 2025 00:54:07 +0000 (0:00:00.315) 0:00:07.502 **** 2025-09-18 00:55:46.426778 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.426789 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.426800 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.426810 | orchestrator | 2025-09-18 00:55:46.426821 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.426832 | orchestrator | Thursday 18 September 2025 00:54:08 +0000 (0:00:00.312) 0:00:07.814 **** 2025-09-18 00:55:46.426842 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426853 | orchestrator | 2025-09-18 00:55:46.426864 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.426875 | orchestrator | Thursday 18 September 2025 00:54:08 +0000 (0:00:00.306) 0:00:08.121 **** 2025-09-18 00:55:46.426886 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.426896 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.426907 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.426918 | orchestrator | 2025-09-18 00:55:46.426928 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.426939 | orchestrator | Thursday 18 September 2025 00:54:08 +0000 (0:00:00.314) 0:00:08.435 **** 2025-09-18 00:55:46.426950 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.426961 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.426972 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.426982 | orchestrator | 2025-09-18 00:55:46.426993 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.427004 | orchestrator | Thursday 18 September 2025 00:54:09 +0000 (0:00:00.361) 0:00:08.796 **** 2025-09-18 00:55:46.427015 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427025 | orchestrator | 2025-09-18 00:55:46.427036 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.427047 | orchestrator | Thursday 18 September 2025 00:54:09 +0000 (0:00:00.133) 0:00:08.930 **** 2025-09-18 00:55:46.427058 | orchestrator | skipping: 
[testbed-node-0] 2025-09-18 00:55:46.427068 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.427079 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.427090 | orchestrator | 2025-09-18 00:55:46.427101 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.427111 | orchestrator | Thursday 18 September 2025 00:54:09 +0000 (0:00:00.294) 0:00:09.224 **** 2025-09-18 00:55:46.427122 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.427133 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.427144 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.427155 | orchestrator | 2025-09-18 00:55:46.427171 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.427182 | orchestrator | Thursday 18 September 2025 00:54:10 +0000 (0:00:00.529) 0:00:09.754 **** 2025-09-18 00:55:46.427193 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427204 | orchestrator | 2025-09-18 00:55:46.427214 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.427225 | orchestrator | Thursday 18 September 2025 00:54:10 +0000 (0:00:00.156) 0:00:09.911 **** 2025-09-18 00:55:46.427236 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427247 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.427257 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.427268 | orchestrator | 2025-09-18 00:55:46.427332 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.427351 | orchestrator | Thursday 18 September 2025 00:54:10 +0000 (0:00:00.291) 0:00:10.203 **** 2025-09-18 00:55:46.427361 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.427372 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.427383 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.427394 | orchestrator | 2025-09-18 00:55:46.427404 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.427415 | orchestrator | Thursday 18 September 2025 00:54:10 +0000 (0:00:00.324) 0:00:10.527 **** 2025-09-18 00:55:46.427432 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427443 | orchestrator | 2025-09-18 00:55:46.427454 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.427465 | orchestrator | Thursday 18 September 2025 00:54:11 +0000 (0:00:00.128) 0:00:10.655 **** 2025-09-18 00:55:46.427475 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427486 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.427497 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.427507 | orchestrator | 2025-09-18 00:55:46.427518 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.427529 | orchestrator | Thursday 18 September 2025 00:54:11 +0000 (0:00:00.301) 0:00:10.957 **** 2025-09-18 00:55:46.427540 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.427550 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.427561 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.427570 | orchestrator | 2025-09-18 00:55:46.427580 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.427590 | orchestrator | Thursday 18 September 2025 
00:54:12 +0000 (0:00:00.655) 0:00:11.612 **** 2025-09-18 00:55:46.427599 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427609 | orchestrator | 2025-09-18 00:55:46.427618 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.427628 | orchestrator | Thursday 18 September 2025 00:54:12 +0000 (0:00:00.154) 0:00:11.766 **** 2025-09-18 00:55:46.427637 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427647 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.427656 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.427666 | orchestrator | 2025-09-18 00:55:46.427675 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 00:55:46.427685 | orchestrator | Thursday 18 September 2025 00:54:12 +0000 (0:00:00.298) 0:00:12.065 **** 2025-09-18 00:55:46.427694 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:55:46.427704 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:55:46.427714 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:55:46.427723 | orchestrator | 2025-09-18 00:55:46.427733 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 00:55:46.427742 | orchestrator | Thursday 18 September 2025 00:54:12 +0000 (0:00:00.326) 0:00:12.391 **** 2025-09-18 00:55:46.427752 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427761 | orchestrator | 2025-09-18 00:55:46.427771 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 00:55:46.427780 | orchestrator | Thursday 18 September 2025 00:54:12 +0000 (0:00:00.127) 0:00:12.519 **** 2025-09-18 00:55:46.427790 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.427800 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.427809 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.427818 | orchestrator | 2025-09-18 00:55:46.427828 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-18 00:55:46.427837 | orchestrator | Thursday 18 September 2025 00:54:13 +0000 (0:00:00.501) 0:00:13.021 **** 2025-09-18 00:55:46.427847 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:55:46.427856 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:55:46.427866 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:55:46.427875 | orchestrator | 2025-09-18 00:55:46.427885 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-18 00:55:46.427901 | orchestrator | Thursday 18 September 2025 00:54:15 +0000 (0:00:01.731) 0:00:14.752 **** 2025-09-18 00:55:46.427910 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-18 00:55:46.427920 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-18 00:55:46.427930 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-18 00:55:46.427939 | orchestrator | 2025-09-18 00:55:46.427949 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-18 00:55:46.427958 | orchestrator | Thursday 18 September 2025 00:54:17 +0000 (0:00:02.176) 0:00:16.928 **** 2025-09-18 00:55:46.427968 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 
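
The "Copying over ..." tasks above (config.json, horizon.conf, kolla-settings.py, then custom-settings.py) are plain template deployments into the horizon config directory created earlier by "Ensuring config directories exist". A minimal sketch of that directory-plus-template pattern, with paths and modes simplified for illustration; the real role derives them from the service dict shown in the log:

# horizon-config-sketch.yml, simplified illustration only.
- hosts: horizon
  become: true
  tasks:
    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: /etc/kolla/horizon
        state: directory
        mode: "0770"

    - name: Copying over horizon.conf
      ansible.builtin.template:
        src: horizon.conf.j2
        dest: /etc/kolla/horizon/horizon.conf
        mode: "0660"

    - name: Copying over kolla-settings.py
      ansible.builtin.template:
        src: _9998-kolla-settings.py.j2
        dest: /etc/kolla/horizon/_9998-kolla-settings.py
        mode: "0660"
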
2025-09-18 00:55:46.427977 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-18 00:55:46.427987 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-18 00:55:46.427997 | orchestrator | 2025-09-18 00:55:46.428006 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-18 00:55:46.428016 | orchestrator | Thursday 18 September 2025 00:54:19 +0000 (0:00:02.355) 0:00:19.284 **** 2025-09-18 00:55:46.428031 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-18 00:55:46.428041 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-18 00:55:46.428051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-18 00:55:46.428061 | orchestrator | 2025-09-18 00:55:46.428070 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-18 00:55:46.428080 | orchestrator | Thursday 18 September 2025 00:54:21 +0000 (0:00:02.130) 0:00:21.414 **** 2025-09-18 00:55:46.428089 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.428099 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.428108 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.428118 | orchestrator | 2025-09-18 00:55:46.428127 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-18 00:55:46.428137 | orchestrator | Thursday 18 September 2025 00:54:22 +0000 (0:00:00.347) 0:00:21.762 **** 2025-09-18 00:55:46.428146 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.428156 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.428165 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.428174 | orchestrator | 2025-09-18 00:55:46.428193 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-18 00:55:46.428203 | orchestrator | Thursday 18 September 2025 00:54:22 +0000 (0:00:00.314) 0:00:22.076 **** 2025-09-18 00:55:46.428213 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:55:46.428222 | orchestrator | 2025-09-18 00:55:46.428232 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-18 00:55:46.428241 | orchestrator | Thursday 18 September 2025 00:54:23 +0000 (0:00:00.571) 0:00:22.648 **** 2025-09-18 00:55:46.428252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.428299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 
00:55:46.428312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.428329 | orchestrator | 2025-09-18 00:55:46.428339 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-18 00:55:46.428349 | orchestrator | Thursday 18 September 2025 00:54:25 +0000 (0:00:01.965) 0:00:24.614 **** 2025-09-18 00:55:46.428373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:55:46.428392 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.428403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:55:46.428418 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.428435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:55:46.428452 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.428462 | orchestrator | 2025-09-18 00:55:46.428471 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-18 00:55:46.428481 | orchestrator | Thursday 18 September 2025 00:54:25 +0000 (0:00:00.678) 0:00:25.292 **** 2025-09-18 00:55:46.428498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:55:46.428509 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.428525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:55:46.428542 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.428559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 00:55:46.428570 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.428579 | orchestrator | 2025-09-18 00:55:46.428589 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-18 00:55:46.428603 | orchestrator | Thursday 18 September 2025 00:54:26 +0000 (0:00:00.884) 0:00:26.176 **** 2025-09-18 00:55:46.428614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.428643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.428655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 00:55:46.428672 | orchestrator | 2025-09-18 00:55:46.428681 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-18 00:55:46.428691 | orchestrator | Thursday 18 September 2025 00:54:28 +0000 (0:00:01.497) 0:00:27.674 **** 2025-09-18 00:55:46.428701 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:55:46.428710 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:55:46.428720 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:55:46.428729 | orchestrator | 2025-09-18 00:55:46.428739 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-18 00:55:46.428749 | orchestrator | Thursday 18 September 2025 00:54:28 +0000 (0:00:00.289) 0:00:27.963 **** 2025-09-18 00:55:46.428759 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:55:46.428768 | orchestrator | 2025-09-18 00:55:46.428778 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-18 00:55:46.428787 | orchestrator | Thursday 18 September 2025 00:54:28 +0000 (0:00:00.542) 0:00:28.505 **** 2025-09-18 00:55:46.428797 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:55:46.428807 | orchestrator | 2025-09-18 00:55:46.428821 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-18 00:55:46.428831 | orchestrator | Thursday 18 September 2025 00:54:31 +0000 (0:00:02.365) 0:00:30.871 **** 2025-09-18 00:55:46.428840 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:55:46.428849 | 
orchestrator | 2025-09-18 00:55:46.428859 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-18 00:55:46.428869 | orchestrator | Thursday 18 September 2025 00:54:34 +0000 (0:00:02.711) 0:00:33.582 **** 2025-09-18 00:55:46.428878 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:55:46.428888 | orchestrator | 2025-09-18 00:55:46.428897 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-18 00:55:46.428907 | orchestrator | Thursday 18 September 2025 00:54:50 +0000 (0:00:16.296) 0:00:49.884 **** 2025-09-18 00:55:46.428925 | orchestrator | 2025-09-18 00:55:46.428935 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-18 00:55:46.428945 | orchestrator | Thursday 18 September 2025 00:54:50 +0000 (0:00:00.082) 0:00:49.967 **** 2025-09-18 00:55:46.428954 | orchestrator | 2025-09-18 00:55:46.428964 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-18 00:55:46.428973 | orchestrator | Thursday 18 September 2025 00:54:50 +0000 (0:00:00.067) 0:00:50.034 **** 2025-09-18 00:55:46.428983 | orchestrator | 2025-09-18 00:55:46.428999 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-18 00:55:46.429009 | orchestrator | Thursday 18 September 2025 00:54:50 +0000 (0:00:00.068) 0:00:50.103 **** 2025-09-18 00:55:46.429019 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:55:46.429028 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:55:46.429038 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:55:46.429048 | orchestrator | 2025-09-18 00:55:46.429057 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:55:46.429067 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-18 00:55:46.429077 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-18 00:55:46.429087 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-18 00:55:46.429097 | orchestrator | 2025-09-18 00:55:46.429106 | orchestrator | 2025-09-18 00:55:46.429116 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:55:46.429125 | orchestrator | Thursday 18 September 2025 00:55:45 +0000 (0:00:54.683) 0:01:44.787 **** 2025-09-18 00:55:46.429135 | orchestrator | =============================================================================== 2025-09-18 00:55:46.429145 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.68s 2025-09-18 00:55:46.429154 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.30s 2025-09-18 00:55:46.429163 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.72s 2025-09-18 00:55:46.429173 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.37s 2025-09-18 00:55:46.429182 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.36s 2025-09-18 00:55:46.429192 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.18s 2025-09-18 00:55:46.429201 | orchestrator | horizon : Copying over custom-settings.py 
------------------------------- 2.13s 2025-09-18 00:55:46.429211 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.97s 2025-09-18 00:55:46.429220 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.73s 2025-09-18 00:55:46.429230 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.50s 2025-09-18 00:55:46.429239 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.07s 2025-09-18 00:55:46.429249 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2025-09-18 00:55:46.429259 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2025-09-18 00:55:46.429268 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.68s 2025-09-18 00:55:46.429294 | orchestrator | horizon : Update policy file name --------------------------------------- 0.66s 2025-09-18 00:55:46.429304 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2025-09-18 00:55:46.429313 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-09-18 00:55:46.429323 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-09-18 00:55:46.429332 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-09-18 00:55:46.429348 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-09-18 00:55:46.429358 | orchestrator | 2025-09-18 00:55:46 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:46.429367 | orchestrator | 2025-09-18 00:55:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:49.487571 | orchestrator | 2025-09-18 00:55:49 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:49.488912 | orchestrator | 2025-09-18 00:55:49 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:49.488950 | orchestrator | 2025-09-18 00:55:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:52.529440 | orchestrator | 2025-09-18 00:55:52 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:52.530959 | orchestrator | 2025-09-18 00:55:52 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:52.530993 | orchestrator | 2025-09-18 00:55:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:55.572247 | orchestrator | 2025-09-18 00:55:55 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:55.573737 | orchestrator | 2025-09-18 00:55:55 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:55.573778 | orchestrator | 2025-09-18 00:55:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:55:58.619181 | orchestrator | 2025-09-18 00:55:58 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:55:58.621310 | orchestrator | 2025-09-18 00:55:58 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:55:58.621349 | orchestrator | 2025-09-18 00:55:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:01.657332 | orchestrator | 2025-09-18 00:56:01 | INFO  | Task 
a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:01.659648 | orchestrator | 2025-09-18 00:56:01 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:01.659681 | orchestrator | 2025-09-18 00:56:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:04.709391 | orchestrator | 2025-09-18 00:56:04 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:04.711268 | orchestrator | 2025-09-18 00:56:04 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:04.711331 | orchestrator | 2025-09-18 00:56:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:07.751466 | orchestrator | 2025-09-18 00:56:07 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:07.753423 | orchestrator | 2025-09-18 00:56:07 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:07.753737 | orchestrator | 2025-09-18 00:56:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:10.800190 | orchestrator | 2025-09-18 00:56:10 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:10.802069 | orchestrator | 2025-09-18 00:56:10 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:10.802125 | orchestrator | 2025-09-18 00:56:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:13.846930 | orchestrator | 2025-09-18 00:56:13 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:13.848220 | orchestrator | 2025-09-18 00:56:13 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:13.848432 | orchestrator | 2025-09-18 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:16.892185 | orchestrator | 2025-09-18 00:56:16 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:16.892542 | orchestrator | 2025-09-18 00:56:16 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:16.892576 | orchestrator | 2025-09-18 00:56:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:19.940842 | orchestrator | 2025-09-18 00:56:19 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:19.943072 | orchestrator | 2025-09-18 00:56:19 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:19.943111 | orchestrator | 2025-09-18 00:56:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:22.990926 | orchestrator | 2025-09-18 00:56:22 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:22.992498 | orchestrator | 2025-09-18 00:56:22 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:22.993188 | orchestrator | 2025-09-18 00:56:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:26.046304 | orchestrator | 2025-09-18 00:56:26 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:26.047479 | orchestrator | 2025-09-18 00:56:26 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:26.047512 | orchestrator | 2025-09-18 00:56:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:29.103279 | orchestrator | 2025-09-18 00:56:29 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:29.104170 | orchestrator 
| 2025-09-18 00:56:29 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:29.104261 | orchestrator | 2025-09-18 00:56:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:32.140785 | orchestrator | 2025-09-18 00:56:32 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:32.141920 | orchestrator | 2025-09-18 00:56:32 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state STARTED 2025-09-18 00:56:32.141953 | orchestrator | 2025-09-18 00:56:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:35.189461 | orchestrator | 2025-09-18 00:56:35 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:35.190796 | orchestrator | 2025-09-18 00:56:35 | INFO  | Task 77340434-31cb-49c4-8c73-72fd706f3637 is in state STARTED 2025-09-18 00:56:35.192330 | orchestrator | 2025-09-18 00:56:35 | INFO  | Task 5bbffa99-1eb0-4d0a-a864-10defc864bd2 is in state SUCCESS 2025-09-18 00:56:35.194890 | orchestrator | 2025-09-18 00:56:35 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:35.195896 | orchestrator | 2025-09-18 00:56:35 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:35.195919 | orchestrator | 2025-09-18 00:56:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:38.231374 | orchestrator | 2025-09-18 00:56:38 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:38.232075 | orchestrator | 2025-09-18 00:56:38 | INFO  | Task 77340434-31cb-49c4-8c73-72fd706f3637 is in state STARTED 2025-09-18 00:56:38.233017 | orchestrator | 2025-09-18 00:56:38 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:38.233827 | orchestrator | 2025-09-18 00:56:38 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:38.234102 | orchestrator | 2025-09-18 00:56:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:41.272916 | orchestrator | 2025-09-18 00:56:41 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:41.273012 | orchestrator | 2025-09-18 00:56:41 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:41.273037 | orchestrator | 2025-09-18 00:56:41 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:41.273049 | orchestrator | 2025-09-18 00:56:41 | INFO  | Task 77340434-31cb-49c4-8c73-72fd706f3637 is in state SUCCESS 2025-09-18 00:56:41.273060 | orchestrator | 2025-09-18 00:56:41 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:41.273071 | orchestrator | 2025-09-18 00:56:41 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:41.273082 | orchestrator | 2025-09-18 00:56:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:44.301367 | orchestrator | 2025-09-18 00:56:44 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:44.302127 | orchestrator | 2025-09-18 00:56:44 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:44.304572 | orchestrator | 2025-09-18 00:56:44 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:44.305784 | orchestrator | 2025-09-18 00:56:44 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:44.306796 | orchestrator | 
2025-09-18 00:56:44 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:44.307002 | orchestrator | 2025-09-18 00:56:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:47.355955 | orchestrator | 2025-09-18 00:56:47 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:47.356704 | orchestrator | 2025-09-18 00:56:47 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:47.358352 | orchestrator | 2025-09-18 00:56:47 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state STARTED 2025-09-18 00:56:47.359132 | orchestrator | 2025-09-18 00:56:47 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:47.359276 | orchestrator | 2025-09-18 00:56:47 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:47.359714 | orchestrator | 2025-09-18 00:56:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:50.389254 | orchestrator | 2025-09-18 00:56:50 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:50.390887 | orchestrator | 2025-09-18 00:56:50 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:50.393965 | orchestrator | 2025-09-18 00:56:50 | INFO  | Task a2eb3381-9914-456d-84e5-7767cef1ceda is in state SUCCESS 2025-09-18 00:56:50.394642 | orchestrator | 2025-09-18 00:56:50.394673 | orchestrator | 2025-09-18 00:56:50.394686 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-18 00:56:50.394698 | orchestrator | 2025-09-18 00:56:50.394709 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-18 00:56:50.394721 | orchestrator | Thursday 18 September 2025 00:55:41 +0000 (0:00:00.231) 0:00:00.231 **** 2025-09-18 00:56:50.394732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-18 00:56:50.394744 | orchestrator | 2025-09-18 00:56:50.394755 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-18 00:56:50.394808 | orchestrator | Thursday 18 September 2025 00:55:41 +0000 (0:00:00.234) 0:00:00.466 **** 2025-09-18 00:56:50.394821 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-18 00:56:50.394832 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-18 00:56:50.394844 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-18 00:56:50.394855 | orchestrator | 2025-09-18 00:56:50.394865 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-18 00:56:50.394876 | orchestrator | Thursday 18 September 2025 00:55:42 +0000 (0:00:01.192) 0:00:01.658 **** 2025-09-18 00:56:50.394887 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-18 00:56:50.394898 | orchestrator | 2025-09-18 00:56:50.394908 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-18 00:56:50.394919 | orchestrator | Thursday 18 September 2025 00:55:43 +0000 (0:00:01.161) 0:00:02.820 **** 2025-09-18 00:56:50.394930 | orchestrator | changed: [testbed-manager] 2025-09-18 00:56:50.394941 | orchestrator | 2025-09-18 00:56:50.394952 | 
orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-18 00:56:50.394962 | orchestrator | Thursday 18 September 2025 00:55:44 +0000 (0:00:01.050) 0:00:03.871 **** 2025-09-18 00:56:50.394973 | orchestrator | changed: [testbed-manager] 2025-09-18 00:56:50.394984 | orchestrator | 2025-09-18 00:56:50.394995 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-18 00:56:50.395006 | orchestrator | Thursday 18 September 2025 00:55:45 +0000 (0:00:00.903) 0:00:04.774 **** 2025-09-18 00:56:50.395017 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-18 00:56:50.395027 | orchestrator | ok: [testbed-manager] 2025-09-18 00:56:50.395038 | orchestrator | 2025-09-18 00:56:50.395049 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-18 00:56:50.395060 | orchestrator | Thursday 18 September 2025 00:56:22 +0000 (0:00:36.699) 0:00:41.474 **** 2025-09-18 00:56:50.395071 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-18 00:56:50.395082 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-18 00:56:50.395092 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-18 00:56:50.395103 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-18 00:56:50.395113 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-18 00:56:50.395124 | orchestrator | 2025-09-18 00:56:50.395135 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-18 00:56:50.395145 | orchestrator | Thursday 18 September 2025 00:56:26 +0000 (0:00:04.126) 0:00:45.600 **** 2025-09-18 00:56:50.395156 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-18 00:56:50.395167 | orchestrator | 2025-09-18 00:56:50.395177 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-18 00:56:50.395188 | orchestrator | Thursday 18 September 2025 00:56:26 +0000 (0:00:00.464) 0:00:46.065 **** 2025-09-18 00:56:50.395198 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:56:50.395209 | orchestrator | 2025-09-18 00:56:50.395220 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-18 00:56:50.395308 | orchestrator | Thursday 18 September 2025 00:56:26 +0000 (0:00:00.126) 0:00:46.192 **** 2025-09-18 00:56:50.395528 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:56:50.395541 | orchestrator | 2025-09-18 00:56:50.395552 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-18 00:56:50.395563 | orchestrator | Thursday 18 September 2025 00:56:27 +0000 (0:00:00.320) 0:00:46.512 **** 2025-09-18 00:56:50.395574 | orchestrator | changed: [testbed-manager] 2025-09-18 00:56:50.395585 | orchestrator | 2025-09-18 00:56:50.395596 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-18 00:56:50.395655 | orchestrator | Thursday 18 September 2025 00:56:29 +0000 (0:00:02.081) 0:00:48.594 **** 2025-09-18 00:56:50.395683 | orchestrator | changed: [testbed-manager] 2025-09-18 00:56:50.395694 | orchestrator | 2025-09-18 00:56:50.395705 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-18 00:56:50.395716 | orchestrator | Thursday 18 September 2025 00:56:30 
+0000 (0:00:00.779) 0:00:49.373 **** 2025-09-18 00:56:50.395727 | orchestrator | changed: [testbed-manager] 2025-09-18 00:56:50.395738 | orchestrator | 2025-09-18 00:56:50.395748 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-18 00:56:50.395759 | orchestrator | Thursday 18 September 2025 00:56:30 +0000 (0:00:00.585) 0:00:49.958 **** 2025-09-18 00:56:50.395770 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-18 00:56:50.395780 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-18 00:56:50.395791 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-18 00:56:50.395856 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-18 00:56:50.395870 | orchestrator | 2025-09-18 00:56:50.395880 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:56:50.396240 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 00:56:50.396253 | orchestrator | 2025-09-18 00:56:50.396264 | orchestrator | 2025-09-18 00:56:50.396286 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:56:50.396297 | orchestrator | Thursday 18 September 2025 00:56:32 +0000 (0:00:01.515) 0:00:51.474 **** 2025-09-18 00:56:50.396308 | orchestrator | =============================================================================== 2025-09-18 00:56:50.396319 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.70s 2025-09-18 00:56:50.396329 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.13s 2025-09-18 00:56:50.396340 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.08s 2025-09-18 00:56:50.396351 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.52s 2025-09-18 00:56:50.396370 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.19s 2025-09-18 00:56:50.396381 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s 2025-09-18 00:56:50.396391 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.05s 2025-09-18 00:56:50.396402 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s 2025-09-18 00:56:50.396413 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2025-09-18 00:56:50.396423 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2025-09-18 00:56:50.396459 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2025-09-18 00:56:50.396470 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s 2025-09-18 00:56:50.396481 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-09-18 00:56:50.396491 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-18 00:56:50.396502 | orchestrator | 2025-09-18 00:56:50.396512 | orchestrator | 2025-09-18 00:56:50.396523 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:56:50.396534 | orchestrator | 2025-09-18 00:56:50.396545 | orchestrator | TASK [Group hosts based on Kolla 
action] *************************************** 2025-09-18 00:56:50.396555 | orchestrator | Thursday 18 September 2025 00:56:36 +0000 (0:00:00.168) 0:00:00.168 **** 2025-09-18 00:56:50.396566 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.396576 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.396587 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.396598 | orchestrator | 2025-09-18 00:56:50.396608 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:56:50.396619 | orchestrator | Thursday 18 September 2025 00:56:37 +0000 (0:00:00.257) 0:00:00.426 **** 2025-09-18 00:56:50.396630 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-18 00:56:50.396993 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-18 00:56:50.397012 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-18 00:56:50.397023 | orchestrator | 2025-09-18 00:56:50.397034 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-18 00:56:50.397045 | orchestrator | 2025-09-18 00:56:50.397055 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-18 00:56:50.397066 | orchestrator | Thursday 18 September 2025 00:56:37 +0000 (0:00:00.545) 0:00:00.972 **** 2025-09-18 00:56:50.397077 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.397088 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.397098 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.397109 | orchestrator | 2025-09-18 00:56:50.397120 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:56:50.397131 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:56:50.397143 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:56:50.397154 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:56:50.397165 | orchestrator | 2025-09-18 00:56:50.397175 | orchestrator | 2025-09-18 00:56:50.397186 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:56:50.397197 | orchestrator | Thursday 18 September 2025 00:56:38 +0000 (0:00:00.758) 0:00:01.731 **** 2025-09-18 00:56:50.397207 | orchestrator | =============================================================================== 2025-09-18 00:56:50.397218 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.76s 2025-09-18 00:56:50.397228 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-09-18 00:56:50.397239 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-18 00:56:50.397250 | orchestrator | 2025-09-18 00:56:50.397260 | orchestrator | 2025-09-18 00:56:50.397271 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:56:50.397282 | orchestrator | 2025-09-18 00:56:50.397292 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:56:50.397357 | orchestrator | Thursday 18 September 2025 00:54:00 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-18 00:56:50.397371 | orchestrator | ok: [testbed-node-0] 2025-09-18 
00:56:50.397382 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.397393 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.397403 | orchestrator | 2025-09-18 00:56:50.397414 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:56:50.397425 | orchestrator | Thursday 18 September 2025 00:54:00 +0000 (0:00:00.300) 0:00:00.563 **** 2025-09-18 00:56:50.397456 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-18 00:56:50.397468 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-18 00:56:50.397479 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-18 00:56:50.397490 | orchestrator | 2025-09-18 00:56:50.397501 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-18 00:56:50.397512 | orchestrator | 2025-09-18 00:56:50.397562 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 00:56:50.397575 | orchestrator | Thursday 18 September 2025 00:54:01 +0000 (0:00:00.420) 0:00:00.983 **** 2025-09-18 00:56:50.397586 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:56:50.397597 | orchestrator | 2025-09-18 00:56:50.397608 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-18 00:56:50.397619 | orchestrator | Thursday 18 September 2025 00:54:01 +0000 (0:00:00.527) 0:00:01.511 **** 2025-09-18 00:56:50.397642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.397666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.397679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.397692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.397747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.397768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.397780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.397792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.397804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.397815 | orchestrator | 2025-09-18 00:56:50.397826 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-18 00:56:50.397837 | orchestrator | Thursday 18 September 2025 00:54:03 +0000 (0:00:01.628) 0:00:03.139 **** 2025-09-18 00:56:50.397848 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-18 00:56:50.397859 | orchestrator | 2025-09-18 00:56:50.397870 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-18 00:56:50.397880 | orchestrator | Thursday 18 September 2025 00:54:04 +0000 (0:00:00.939) 0:00:04.079 **** 2025-09-18 00:56:50.397891 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.397902 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.397912 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.397923 | orchestrator | 2025-09-18 00:56:50.397934 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-18 00:56:50.397944 | orchestrator | Thursday 18 September 2025 00:54:05 +0000 (0:00:00.579) 0:00:04.658 **** 2025-09-18 00:56:50.397955 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 00:56:50.397966 | orchestrator | 2025-09-18 00:56:50.397976 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 00:56:50.397994 | orchestrator | Thursday 18 September 2025 00:54:05 +0000 (0:00:00.792) 0:00:05.451 **** 2025-09-18 00:56:50.398004 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:56:50.398063 | orchestrator | 2025-09-18 00:56:50.398084 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-18 00:56:50.398096 | orchestrator | Thursday 18 September 2025 00:54:06 +0000 (0:00:00.523) 0:00:05.974 **** 2025-09-18 00:56:50.398113 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398244 | orchestrator | 2025-09-18 00:56:50.398256 | orchestrator | TASK [service-cert-copy : keystone | Copying 
over backend internal TLS certificate] *** 2025-09-18 00:56:50.398267 | orchestrator | Thursday 18 September 2025 00:54:09 +0000 (0:00:03.307) 0:00:09.282 **** 2025-09-18 00:56:50.398279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:56:50.398305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:56:50.398340 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.398352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:56:50.398364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:56:50.398393 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.398413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:56:50.398430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:56:50.398488 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.398498 | orchestrator | 2025-09-18 00:56:50.398509 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-18 00:56:50.398520 | orchestrator | Thursday 18 September 2025 00:54:10 +0000 (0:00:00.838) 0:00:10.121 **** 2025-09-18 00:56:50.398532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:56:50.398544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:56:50.398574 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.398599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:56:50.398612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:56:50.398634 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.398646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 00:56:50.398664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 
00:56:50.398682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 00:56:50.398693 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.398704 | orchestrator | 2025-09-18 00:56:50.398715 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-18 00:56:50.398726 | orchestrator | Thursday 18 September 2025 00:54:11 +0000 (0:00:00.788) 0:00:10.910 **** 2025-09-18 00:56:50.398742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.398873 | orchestrator | 2025-09-18 00:56:50.398884 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-18 00:56:50.398895 | orchestrator | Thursday 18 September 2025 00:54:14 +0000 (0:00:03.355) 0:00:14.265 **** 2025-09-18 00:56:50.398913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.398973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.398985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.399009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.399021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.399032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.399044 | orchestrator | 2025-09-18 00:56:50.399055 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-18 00:56:50.399072 | orchestrator | Thursday 18 September 2025 00:54:20 +0000 (0:00:05.645) 0:00:19.911 **** 2025-09-18 00:56:50.399083 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.399094 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:56:50.399105 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:56:50.399115 | orchestrator | 2025-09-18 00:56:50.399126 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-18 00:56:50.399137 | orchestrator | Thursday 18 September 2025 00:54:21 +0000 (0:00:01.443) 0:00:21.355 **** 2025-09-18 00:56:50.399148 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.399159 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.399169 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.399180 | orchestrator | 2025-09-18 00:56:50.399191 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-18 00:56:50.399201 | orchestrator | Thursday 18 September 2025 00:54:22 +0000 (0:00:00.544) 0:00:21.899 **** 2025-09-18 00:56:50.399315 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.399329 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.399340 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.399351 | orchestrator | 2025-09-18 00:56:50.399362 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-18 00:56:50.399372 | orchestrator | Thursday 18 September 2025 00:54:22 +0000 (0:00:00.288) 0:00:22.188 **** 2025-09-18 00:56:50.399383 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.399394 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.399405 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.399416 | orchestrator | 2025-09-18 00:56:50.399427 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-18 00:56:50.399454 | orchestrator | Thursday 18 September 2025 00:54:23 +0000 (0:00:00.529) 0:00:22.717 **** 2025-09-18 00:56:50.399467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.399496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.399509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.399534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.399547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.399559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 00:56:50.399579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.399596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.399614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.399626 | orchestrator | 2025-09-18 00:56:50.399637 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 00:56:50.399652 | orchestrator | Thursday 18 
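The keystone-ssh entries here use healthcheck_listen sshd 8023: the container counts as healthy when its sshd is listening on port 8023, presumably the sshd that the fernet-push.sh/fernet-node-sync.sh scripts copied in a later task connect to. A crude local stand-in for that check (the Kolla helper is also given the process name, sshd, which this sketch ignores; port_is_listening is my name):

    import socket

    def port_is_listening(port: int, host: str = "127.0.0.1", timeout: float = 5.0) -> bool:
        # Approximation of 'healthcheck_listen sshd 8023': try to connect locally.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    port_is_listening(8023)
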
September 2025 00:54:25 +0000 (0:00:02.461) 0:00:25.179 **** 2025-09-18 00:56:50.399671 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.399690 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.399718 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.399740 | orchestrator | 2025-09-18 00:56:50.399758 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-18 00:56:50.399777 | orchestrator | Thursday 18 September 2025 00:54:25 +0000 (0:00:00.312) 0:00:25.491 **** 2025-09-18 00:56:50.399795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-18 00:56:50.399815 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-18 00:56:50.399835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-18 00:56:50.399853 | orchestrator | 2025-09-18 00:56:50.399870 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-18 00:56:50.399881 | orchestrator | Thursday 18 September 2025 00:54:27 +0000 (0:00:01.681) 0:00:27.173 **** 2025-09-18 00:56:50.399892 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 00:56:50.399903 | orchestrator | 2025-09-18 00:56:50.399913 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-18 00:56:50.399924 | orchestrator | Thursday 18 September 2025 00:54:28 +0000 (0:00:00.891) 0:00:28.064 **** 2025-09-18 00:56:50.399935 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.399949 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.399961 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.399973 | orchestrator | 2025-09-18 00:56:50.399986 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-18 00:56:50.399998 | orchestrator | Thursday 18 September 2025 00:54:29 +0000 (0:00:00.837) 0:00:28.902 **** 2025-09-18 00:56:50.400010 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 00:56:50.400022 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-18 00:56:50.400035 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-18 00:56:50.400048 | orchestrator | 2025-09-18 00:56:50.400061 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-18 00:56:50.400074 | orchestrator | Thursday 18 September 2025 00:54:30 +0000 (0:00:01.081) 0:00:29.983 **** 2025-09-18 00:56:50.400086 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.400099 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.400111 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.400124 | orchestrator | 2025-09-18 00:56:50.400137 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-18 00:56:50.400149 | orchestrator | Thursday 18 September 2025 00:54:30 +0000 (0:00:00.316) 0:00:30.299 **** 2025-09-18 00:56:50.400250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-18 00:56:50.400263 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-18 00:56:50.400275 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-18 00:56:50.400298 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-18 00:56:50.400312 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-18 00:56:50.400333 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-18 00:56:50.400347 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-18 00:56:50.400359 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-18 00:56:50.400370 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-18 00:56:50.400381 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-18 00:56:50.400399 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-18 00:56:50.400410 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-18 00:56:50.400421 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-18 00:56:50.400487 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-18 00:56:50.400500 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-18 00:56:50.400511 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 00:56:50.400522 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 00:56:50.400533 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 00:56:50.400544 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 00:56:50.400555 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 00:56:50.400565 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 00:56:50.400576 | orchestrator | 2025-09-18 00:56:50.400587 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-18 00:56:50.400598 | orchestrator | Thursday 18 September 2025 00:54:39 +0000 (0:00:09.149) 0:00:39.449 **** 2025-09-18 00:56:50.400609 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 00:56:50.400619 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 00:56:50.400630 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 00:56:50.400641 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 00:56:50.400651 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 00:56:50.400662 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 00:56:50.400673 | orchestrator | 2025-09-18 00:56:50.400684 | orchestrator | TASK [keystone : Check keystone 
containers] ************************************ 2025-09-18 00:56:50.400694 | orchestrator | Thursday 18 September 2025 00:54:42 +0000 (0:00:02.918) 0:00:42.367 **** 2025-09-18 00:56:50.400707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.400836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.400858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 00:56:50.400870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.400883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.400894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 00:56:50.400913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.400933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.400950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 00:56:50.400962 | orchestrator | 2025-09-18 00:56:50.400973 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 00:56:50.400984 | orchestrator | Thursday 18 September 2025 00:54:45 +0000 (0:00:02.327) 0:00:44.695 **** 2025-09-18 00:56:50.400995 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.401006 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.401016 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.401026 | orchestrator | 2025-09-18 00:56:50.401035 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-18 00:56:50.401045 | orchestrator | Thursday 18 September 2025 00:54:45 +0000 (0:00:00.312) 0:00:45.008 **** 2025-09-18 00:56:50.401054 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401064 | orchestrator | 2025-09-18 00:56:50.401074 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-18 00:56:50.401083 | orchestrator | Thursday 18 September 2025 00:54:47 +0000 (0:00:02.212) 0:00:47.221 **** 2025-09-18 00:56:50.401093 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401103 | orchestrator | 2025-09-18 00:56:50.401112 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-18 00:56:50.401122 | orchestrator | Thursday 18 September 2025 00:54:49 +0000 (0:00:02.286) 0:00:49.507 **** 2025-09-18 00:56:50.401131 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.401141 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.401151 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.401160 | orchestrator | 2025-09-18 00:56:50.401170 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-18 00:56:50.401186 | orchestrator | Thursday 18 September 2025 00:54:50 +0000 (0:00:00.941) 0:00:50.449 **** 2025-09-18 00:56:50.401196 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.401205 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.401215 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.401224 | orchestrator | 2025-09-18 00:56:50.401234 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-18 00:56:50.401244 | orchestrator | Thursday 18 September 2025 00:54:51 +0000 (0:00:00.577) 0:00:51.027 **** 2025-09-18 00:56:50.401253 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.401263 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:56:50.401273 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.401282 | orchestrator | 2025-09-18 00:56:50.401292 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-18 00:56:50.401302 | orchestrator | Thursday 18 September 2025 00:54:51 +0000 (0:00:00.376) 0:00:51.403 **** 2025-09-18 00:56:50.401312 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401321 | orchestrator | 2025-09-18 00:56:50.401331 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-18 00:56:50.401340 | orchestrator | Thursday 18 September 2025 00:55:06 +0000 (0:00:14.784) 0:01:06.187 **** 2025-09-18 00:56:50.401350 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401360 | orchestrator | 2025-09-18 
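The "Check keystone containers" output above dumps the kolla-ansible service definitions as Python dicts, including a healthcheck block (interval, retries, start_period, test command, timeout) per container. As a rough illustration only, and not kolla-ansible's actual container module, the sketch below shows how such a healthcheck dict could be mapped onto Docker's HealthConfig fields; the conversion and field mapping are assumptions for clarity.

    # Illustrative sketch: map a logged healthcheck dict onto Docker HealthConfig fields.
    healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
        "timeout": "30",
    }

    def to_docker_healthcheck(hc):
        # Docker's API expects durations in nanoseconds.
        sec = 1_000_000_000
        return {
            "Test": hc["test"],
            "Interval": int(hc["interval"]) * sec,
            "Timeout": int(hc["timeout"]) * sec,
            "Retries": int(hc["retries"]),
            "StartPeriod": int(hc["start_period"]) * sec,
        }

    print(to_docker_healthcheck(healthcheck))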
00:56:50.401369 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-18 00:56:50.401379 | orchestrator | Thursday 18 September 2025 00:55:17 +0000 (0:00:10.474) 0:01:16.661 **** 2025-09-18 00:56:50.401389 | orchestrator | 2025-09-18 00:56:50.401399 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-18 00:56:50.401408 | orchestrator | Thursday 18 September 2025 00:55:17 +0000 (0:00:00.065) 0:01:16.726 **** 2025-09-18 00:56:50.401418 | orchestrator | 2025-09-18 00:56:50.401428 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-18 00:56:50.401460 | orchestrator | Thursday 18 September 2025 00:55:17 +0000 (0:00:00.064) 0:01:16.791 **** 2025-09-18 00:56:50.401472 | orchestrator | 2025-09-18 00:56:50.401482 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-18 00:56:50.401494 | orchestrator | Thursday 18 September 2025 00:55:17 +0000 (0:00:00.067) 0:01:16.858 **** 2025-09-18 00:56:50.401505 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401516 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:56:50.401526 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:56:50.401537 | orchestrator | 2025-09-18 00:56:50.401547 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-18 00:56:50.401557 | orchestrator | Thursday 18 September 2025 00:55:36 +0000 (0:00:19.471) 0:01:36.330 **** 2025-09-18 00:56:50.401566 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401576 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:56:50.401585 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:56:50.401595 | orchestrator | 2025-09-18 00:56:50.401605 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-18 00:56:50.401614 | orchestrator | Thursday 18 September 2025 00:55:46 +0000 (0:00:09.846) 0:01:46.177 **** 2025-09-18 00:56:50.401624 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:56:50.401634 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:56:50.401649 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:56:50.401659 | orchestrator | 2025-09-18 00:56:50.401669 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 00:56:50.401679 | orchestrator | Thursday 18 September 2025 00:55:58 +0000 (0:00:11.563) 0:01:57.741 **** 2025-09-18 00:56:50.401689 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:56:50.401698 | orchestrator | 2025-09-18 00:56:50.401708 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-18 00:56:50.401718 | orchestrator | Thursday 18 September 2025 00:55:58 +0000 (0:00:00.738) 0:01:58.479 **** 2025-09-18 00:56:50.401734 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.401744 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:56:50.401770 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:56:50.401780 | orchestrator | 2025-09-18 00:56:50.401790 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-18 00:56:50.401799 | orchestrator | Thursday 18 September 2025 00:55:59 +0000 (0:00:00.779) 0:01:59.259 **** 2025-09-18 00:56:50.401809 | orchestrator | changed: 
[testbed-node-0] 2025-09-18 00:56:50.401819 | orchestrator | 2025-09-18 00:56:50.401829 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-18 00:56:50.401838 | orchestrator | Thursday 18 September 2025 00:56:01 +0000 (0:00:01.877) 0:02:01.136 **** 2025-09-18 00:56:50.401848 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-18 00:56:50.401858 | orchestrator | 2025-09-18 00:56:50.401868 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-18 00:56:50.401877 | orchestrator | Thursday 18 September 2025 00:56:13 +0000 (0:00:11.658) 0:02:12.795 **** 2025-09-18 00:56:50.401887 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-18 00:56:50.401897 | orchestrator | 2025-09-18 00:56:50.401906 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-18 00:56:50.401916 | orchestrator | Thursday 18 September 2025 00:56:36 +0000 (0:00:23.262) 0:02:36.058 **** 2025-09-18 00:56:50.401925 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-18 00:56:50.401935 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-18 00:56:50.401945 | orchestrator | 2025-09-18 00:56:50.401955 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-18 00:56:50.401964 | orchestrator | Thursday 18 September 2025 00:56:43 +0000 (0:00:07.498) 0:02:43.556 **** 2025-09-18 00:56:50.401974 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.401984 | orchestrator | 2025-09-18 00:56:50.401993 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-18 00:56:50.402003 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:00.262) 0:02:43.818 **** 2025-09-18 00:56:50.402012 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.402052 | orchestrator | 2025-09-18 00:56:50.402061 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-18 00:56:50.402071 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:00.134) 0:02:43.953 **** 2025-09-18 00:56:50.402081 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.402090 | orchestrator | 2025-09-18 00:56:50.402100 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-18 00:56:50.402110 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:00.127) 0:02:44.080 **** 2025-09-18 00:56:50.402120 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.402129 | orchestrator | 2025-09-18 00:56:50.402139 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-18 00:56:50.402149 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:00.426) 0:02:44.507 **** 2025-09-18 00:56:50.402159 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:56:50.402168 | orchestrator | 2025-09-18 00:56:50.402192 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 00:56:50.402212 | orchestrator | Thursday 18 September 2025 00:56:48 +0000 (0:00:03.548) 0:02:48.056 **** 2025-09-18 00:56:50.402222 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:56:50.402232 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
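The service-ks-register tasks above create the keystone identity service and its internal and public endpoints in RegionOne. A minimal openstacksdk sketch of the same two operations is shown below; it is illustrative only (kolla-ansible uses its own Ansible modules), the cloud name "testbed" is an assumed clouds.yaml entry, and the endpoint URLs are taken from the log.

    import openstack

    # Illustrative only: register an identity service and its endpoints,
    # mirroring the "Creating services" / "Creating endpoints" tasks above.
    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    service = conn.identity.create_service(name="keystone", type="identity")
    for interface, url in {
        "internal": "https://api-int.testbed.osism.xyz:5000",
        "public": "https://api.testbed.osism.xyz:5000",
    }.items():
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",
        )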
00:56:50.402241 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:56:50.402251 | orchestrator | 2025-09-18 00:56:50.402261 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:56:50.402271 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-18 00:56:50.402282 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-18 00:56:50.402299 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-18 00:56:50.402309 | orchestrator | 2025-09-18 00:56:50.402318 | orchestrator | 2025-09-18 00:56:50.402328 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:56:50.402338 | orchestrator | Thursday 18 September 2025 00:56:48 +0000 (0:00:00.438) 0:02:48.495 **** 2025-09-18 00:56:50.402348 | orchestrator | =============================================================================== 2025-09-18 00:56:50.402357 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.26s 2025-09-18 00:56:50.402367 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.47s 2025-09-18 00:56:50.402376 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.78s 2025-09-18 00:56:50.402386 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.66s 2025-09-18 00:56:50.402396 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.56s 2025-09-18 00:56:50.402411 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.47s 2025-09-18 00:56:50.402421 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.85s 2025-09-18 00:56:50.402430 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.15s 2025-09-18 00:56:50.402488 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.50s 2025-09-18 00:56:50.402498 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.65s 2025-09-18 00:56:50.402508 | orchestrator | keystone : Creating default user role ----------------------------------- 3.55s 2025-09-18 00:56:50.402518 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.36s 2025-09-18 00:56:50.402533 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.31s 2025-09-18 00:56:50.402543 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.92s 2025-09-18 00:56:50.402552 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.46s 2025-09-18 00:56:50.402562 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s 2025-09-18 00:56:50.402572 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.29s 2025-09-18 00:56:50.402582 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s 2025-09-18 00:56:50.402591 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.88s 2025-09-18 00:56:50.402601 | orchestrator | keystone : Copying over wsgi-keystone.conf 
------------------------------ 1.68s 2025-09-18 00:56:50.402611 | orchestrator | 2025-09-18 00:56:50 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:50.402621 | orchestrator | 2025-09-18 00:56:50 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:50.402631 | orchestrator | 2025-09-18 00:56:50 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:56:50.402641 | orchestrator | 2025-09-18 00:56:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:53.434903 | orchestrator | 2025-09-18 00:56:53 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:53.438088 | orchestrator | 2025-09-18 00:56:53 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:53.441350 | orchestrator | 2025-09-18 00:56:53 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:53.444107 | orchestrator | 2025-09-18 00:56:53 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:53.446437 | orchestrator | 2025-09-18 00:56:53 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:56:53.447471 | orchestrator | 2025-09-18 00:56:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:56.484001 | orchestrator | 2025-09-18 00:56:56 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:56.484516 | orchestrator | 2025-09-18 00:56:56 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:56.485552 | orchestrator | 2025-09-18 00:56:56 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:56.487868 | orchestrator | 2025-09-18 00:56:56 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:56.488902 | orchestrator | 2025-09-18 00:56:56 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:56:56.489135 | orchestrator | 2025-09-18 00:56:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:56:59.538113 | orchestrator | 2025-09-18 00:56:59 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:56:59.538218 | orchestrator | 2025-09-18 00:56:59 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:56:59.539232 | orchestrator | 2025-09-18 00:56:59 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:56:59.540406 | orchestrator | 2025-09-18 00:56:59 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:56:59.542644 | orchestrator | 2025-09-18 00:56:59 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:56:59.542667 | orchestrator | 2025-09-18 00:56:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:02.812320 | orchestrator | 2025-09-18 00:57:02 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:57:02.812423 | orchestrator | 2025-09-18 00:57:02 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:02.812437 | orchestrator | 2025-09-18 00:57:02 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:02.812449 | orchestrator | 2025-09-18 00:57:02 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:02.812507 | orchestrator | 2025-09-18 00:57:02 | INFO  | 
Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:02.812520 | orchestrator | 2025-09-18 00:57:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:05.611889 | orchestrator | 2025-09-18 00:57:05 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:57:05.613706 | orchestrator | 2025-09-18 00:57:05 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:05.614445 | orchestrator | 2025-09-18 00:57:05 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:05.615345 | orchestrator | 2025-09-18 00:57:05 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:05.617017 | orchestrator | 2025-09-18 00:57:05 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:05.617040 | orchestrator | 2025-09-18 00:57:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:08.650624 | orchestrator | 2025-09-18 00:57:08 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:57:08.653583 | orchestrator | 2025-09-18 00:57:08 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:08.654425 | orchestrator | 2025-09-18 00:57:08 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:08.655219 | orchestrator | 2025-09-18 00:57:08 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:08.656816 | orchestrator | 2025-09-18 00:57:08 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:08.656917 | orchestrator | 2025-09-18 00:57:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:11.689942 | orchestrator | 2025-09-18 00:57:11 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:57:11.690353 | orchestrator | 2025-09-18 00:57:11 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:11.691443 | orchestrator | 2025-09-18 00:57:11 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:11.692081 | orchestrator | 2025-09-18 00:57:11 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:11.692902 | orchestrator | 2025-09-18 00:57:11 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:11.692927 | orchestrator | 2025-09-18 00:57:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:14.728219 | orchestrator | 2025-09-18 00:57:14 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state STARTED 2025-09-18 00:57:14.728619 | orchestrator | 2025-09-18 00:57:14 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:14.729208 | orchestrator | 2025-09-18 00:57:14 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:14.729767 | orchestrator | 2025-09-18 00:57:14 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:14.730299 | orchestrator | 2025-09-18 00:57:14 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:14.730364 | orchestrator | 2025-09-18 00:57:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:17.759353 | orchestrator | 2025-09-18 00:57:17 | INFO  | Task e68da105-3968-4537-897c-6fb30fe47ba9 is in state SUCCESS 2025-09-18 00:57:17.760634 | orchestrator | 2025-09-18 00:57:17 | INFO  | 
Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:17.770542 | orchestrator | 2025-09-18 00:57:17 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:17.770582 | orchestrator | 2025-09-18 00:57:17 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:17.770652 | orchestrator | 2025-09-18 00:57:17 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:17.770968 | orchestrator | 2025-09-18 00:57:17 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:17.771900 | orchestrator | 2025-09-18 00:57:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:20.807768 | orchestrator | 2025-09-18 00:57:20 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:20.808188 | orchestrator | 2025-09-18 00:57:20 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:20.809492 | orchestrator | 2025-09-18 00:57:20 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:20.813595 | orchestrator | 2025-09-18 00:57:20 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:20.814369 | orchestrator | 2025-09-18 00:57:20 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:20.815837 | orchestrator | 2025-09-18 00:57:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:23.841526 | orchestrator | 2025-09-18 00:57:23 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:23.842177 | orchestrator | 2025-09-18 00:57:23 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:23.843080 | orchestrator | 2025-09-18 00:57:23 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:23.843751 | orchestrator | 2025-09-18 00:57:23 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:23.845172 | orchestrator | 2025-09-18 00:57:23 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:23.845207 | orchestrator | 2025-09-18 00:57:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:26.871198 | orchestrator | 2025-09-18 00:57:26 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:26.872078 | orchestrator | 2025-09-18 00:57:26 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:26.872679 | orchestrator | 2025-09-18 00:57:26 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:26.874155 | orchestrator | 2025-09-18 00:57:26 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:26.874993 | orchestrator | 2025-09-18 00:57:26 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:26.875021 | orchestrator | 2025-09-18 00:57:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:29.903375 | orchestrator | 2025-09-18 00:57:29 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:29.903642 | orchestrator | 2025-09-18 00:57:29 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:29.904396 | orchestrator | 2025-09-18 00:57:29 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:29.905077 | orchestrator | 
2025-09-18 00:57:29 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:29.905752 | orchestrator | 2025-09-18 00:57:29 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:29.905773 | orchestrator | 2025-09-18 00:57:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:32.937959 | orchestrator | 2025-09-18 00:57:32 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:32.938091 | orchestrator | 2025-09-18 00:57:32 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:32.938767 | orchestrator | 2025-09-18 00:57:32 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:32.939178 | orchestrator | 2025-09-18 00:57:32 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:32.941116 | orchestrator | 2025-09-18 00:57:32 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:32.941158 | orchestrator | 2025-09-18 00:57:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:35.986634 | orchestrator | 2025-09-18 00:57:35 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:35.986733 | orchestrator | 2025-09-18 00:57:35 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:35.987224 | orchestrator | 2025-09-18 00:57:35 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:35.987794 | orchestrator | 2025-09-18 00:57:35 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:35.988510 | orchestrator | 2025-09-18 00:57:35 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:35.988537 | orchestrator | 2025-09-18 00:57:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:39.016313 | orchestrator | 2025-09-18 00:57:39 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:39.018200 | orchestrator | 2025-09-18 00:57:39 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:39.019188 | orchestrator | 2025-09-18 00:57:39 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:39.019668 | orchestrator | 2025-09-18 00:57:39 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:39.020910 | orchestrator | 2025-09-18 00:57:39 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:39.020948 | orchestrator | 2025-09-18 00:57:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:42.046413 | orchestrator | 2025-09-18 00:57:42 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:42.047441 | orchestrator | 2025-09-18 00:57:42 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:42.048494 | orchestrator | 2025-09-18 00:57:42 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:42.049245 | orchestrator | 2025-09-18 00:57:42 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:42.050425 | orchestrator | 2025-09-18 00:57:42 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:42.050451 | orchestrator | 2025-09-18 00:57:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:45.071922 | orchestrator | 
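The repeated "Task ... is in state STARTED" and "Wait 1 second(s) until the next check" lines come from the orchestrator polling its background deployment tasks until each one reports SUCCESS. A simplified polling loop with the same behaviour might look like the sketch below; fetch_state is a hypothetical callable standing in for the orchestrator's task backend query, and this is not the actual OSISM implementation.

    import time

    def wait_for_tasks(task_ids, fetch_state, interval=1):
        # Poll until every task has left the STARTED state.
        # fetch_state(task_id) -> state string, e.g. "STARTED" or "SUCCESS" (hypothetical helper).
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = fetch_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)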
2025-09-18 00:57:45 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:45.072128 | orchestrator | 2025-09-18 00:57:45 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:45.072748 | orchestrator | 2025-09-18 00:57:45 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:45.073222 | orchestrator | 2025-09-18 00:57:45 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:45.073923 | orchestrator | 2025-09-18 00:57:45 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:45.073943 | orchestrator | 2025-09-18 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:48.097602 | orchestrator | 2025-09-18 00:57:48 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:48.098423 | orchestrator | 2025-09-18 00:57:48 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:48.098724 | orchestrator | 2025-09-18 00:57:48 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:48.099440 | orchestrator | 2025-09-18 00:57:48 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:48.100265 | orchestrator | 2025-09-18 00:57:48 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:48.100283 | orchestrator | 2025-09-18 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:51.124096 | orchestrator | 2025-09-18 00:57:51 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:51.124185 | orchestrator | 2025-09-18 00:57:51 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:51.124668 | orchestrator | 2025-09-18 00:57:51 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:51.126715 | orchestrator | 2025-09-18 00:57:51 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:51.127287 | orchestrator | 2025-09-18 00:57:51 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:51.127308 | orchestrator | 2025-09-18 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:54.147598 | orchestrator | 2025-09-18 00:57:54 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:54.147699 | orchestrator | 2025-09-18 00:57:54 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:54.148257 | orchestrator | 2025-09-18 00:57:54 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:57:54.148882 | orchestrator | 2025-09-18 00:57:54 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:54.149408 | orchestrator | 2025-09-18 00:57:54 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:54.149521 | orchestrator | 2025-09-18 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:57:57.172716 | orchestrator | 2025-09-18 00:57:57 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:57:57.173065 | orchestrator | 2025-09-18 00:57:57 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:57:57.174173 | orchestrator | 2025-09-18 00:57:57 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 
00:57:57.174877 | orchestrator | 2025-09-18 00:57:57 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:57:57.175574 | orchestrator | 2025-09-18 00:57:57 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:57:57.175600 | orchestrator | 2025-09-18 00:57:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:00.201842 | orchestrator | 2025-09-18 00:58:00 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:00.218065 | orchestrator | 2025-09-18 00:58:00 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:00.218117 | orchestrator | 2025-09-18 00:58:00 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:58:00.218130 | orchestrator | 2025-09-18 00:58:00 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:00.218141 | orchestrator | 2025-09-18 00:58:00 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:00.218152 | orchestrator | 2025-09-18 00:58:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:03.224772 | orchestrator | 2025-09-18 00:58:03 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:03.224864 | orchestrator | 2025-09-18 00:58:03 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:03.225262 | orchestrator | 2025-09-18 00:58:03 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state STARTED 2025-09-18 00:58:03.225705 | orchestrator | 2025-09-18 00:58:03 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:03.227171 | orchestrator | 2025-09-18 00:58:03 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:03.227195 | orchestrator | 2025-09-18 00:58:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:06.252709 | orchestrator | 2025-09-18 00:58:06 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:06.252848 | orchestrator | 2025-09-18 00:58:06 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:06.253338 | orchestrator | 2025-09-18 00:58:06 | INFO  | Task 5390112b-a75b-4f48-b5f2-a05037c94f43 is in state SUCCESS 2025-09-18 00:58:06.253686 | orchestrator | 2025-09-18 00:58:06.253711 | orchestrator | 2025-09-18 00:58:06.253722 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:58:06.253734 | orchestrator | 2025-09-18 00:58:06.253745 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:58:06.253756 | orchestrator | Thursday 18 September 2025 00:56:42 +0000 (0:00:00.279) 0:00:00.279 **** 2025-09-18 00:58:06.253767 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:58:06.253779 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:58:06.253790 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:58:06.253801 | orchestrator | ok: [testbed-manager] 2025-09-18 00:58:06.253812 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:58:06.253901 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:58:06.253919 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:58:06.253932 | orchestrator | 2025-09-18 00:58:06.253951 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:58:06.253970 | orchestrator | Thursday 18 September 
2025 00:56:43 +0000 (0:00:00.824) 0:00:01.104 **** 2025-09-18 00:58:06.253988 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254006 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254084 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254106 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254123 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254143 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254163 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-18 00:58:06.254181 | orchestrator | 2025-09-18 00:58:06.254201 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-18 00:58:06.254213 | orchestrator | 2025-09-18 00:58:06.254224 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-18 00:58:06.254234 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:00.981) 0:00:02.086 **** 2025-09-18 00:58:06.254246 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:58:06.254258 | orchestrator | 2025-09-18 00:58:06.254269 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-18 00:58:06.254279 | orchestrator | Thursday 18 September 2025 00:56:45 +0000 (0:00:01.307) 0:00:03.393 **** 2025-09-18 00:58:06.254290 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-18 00:58:06.254301 | orchestrator | 2025-09-18 00:58:06.254312 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-18 00:58:06.254323 | orchestrator | Thursday 18 September 2025 00:56:50 +0000 (0:00:04.113) 0:00:07.507 **** 2025-09-18 00:58:06.254335 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-18 00:58:06.254346 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-18 00:58:06.254357 | orchestrator | 2025-09-18 00:58:06.254382 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-18 00:58:06.254393 | orchestrator | Thursday 18 September 2025 00:56:55 +0000 (0:00:05.833) 0:00:13.340 **** 2025-09-18 00:58:06.254404 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 00:58:06.254415 | orchestrator | 2025-09-18 00:58:06.254426 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-18 00:58:06.254451 | orchestrator | Thursday 18 September 2025 00:56:58 +0000 (0:00:03.021) 0:00:16.361 **** 2025-09-18 00:58:06.254649 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 00:58:06.254664 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-18 00:58:06.254676 | orchestrator | 2025-09-18 00:58:06.254689 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-18 00:58:06.254701 | orchestrator | Thursday 18 September 2025 00:57:03 +0000 (0:00:04.219) 0:00:20.581 **** 2025-09-18 
00:58:06.254714 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 00:58:06.254726 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-18 00:58:06.254738 | orchestrator | 2025-09-18 00:58:06.254752 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-18 00:58:06.254764 | orchestrator | Thursday 18 September 2025 00:57:10 +0000 (0:00:07.090) 0:00:27.672 **** 2025-09-18 00:58:06.254776 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-18 00:58:06.254788 | orchestrator | 2025-09-18 00:58:06.254802 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:58:06.254814 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254827 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254840 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254853 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254865 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254892 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254903 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.254914 | orchestrator | 2025-09-18 00:58:06.254925 | orchestrator | 2025-09-18 00:58:06.254937 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:58:06.254948 | orchestrator | Thursday 18 September 2025 00:57:15 +0000 (0:00:04.976) 0:00:32.649 **** 2025-09-18 00:58:06.254959 | orchestrator | =============================================================================== 2025-09-18 00:58:06.254970 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.09s 2025-09-18 00:58:06.254981 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.83s 2025-09-18 00:58:06.254992 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.98s 2025-09-18 00:58:06.255003 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.22s 2025-09-18 00:58:06.255013 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.11s 2025-09-18 00:58:06.255024 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.02s 2025-09-18 00:58:06.255035 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.31s 2025-09-18 00:58:06.255046 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2025-09-18 00:58:06.255057 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2025-09-18 00:58:06.255067 | orchestrator | 2025-09-18 00:58:06.255078 | orchestrator | 2025-09-18 00:58:06.255089 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-18 00:58:06.255100 | orchestrator | 2025-09-18 00:58:06.255111 | orchestrator | TASK [Disable the ceph dashboard] 
********************************************** 2025-09-18 00:58:06.255132 | orchestrator | Thursday 18 September 2025 00:56:36 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-18 00:58:06.255143 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255154 | orchestrator | 2025-09-18 00:58:06.255165 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-18 00:58:06.255180 | orchestrator | Thursday 18 September 2025 00:56:38 +0000 (0:00:01.459) 0:00:01.675 **** 2025-09-18 00:58:06.255199 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255220 | orchestrator | 2025-09-18 00:58:06.255239 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-18 00:58:06.255258 | orchestrator | Thursday 18 September 2025 00:56:39 +0000 (0:00:00.925) 0:00:02.600 **** 2025-09-18 00:58:06.255277 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255296 | orchestrator | 2025-09-18 00:58:06.255314 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-18 00:58:06.255334 | orchestrator | Thursday 18 September 2025 00:56:40 +0000 (0:00:00.899) 0:00:03.500 **** 2025-09-18 00:58:06.255353 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255368 | orchestrator | 2025-09-18 00:58:06.255379 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-18 00:58:06.255390 | orchestrator | Thursday 18 September 2025 00:56:41 +0000 (0:00:01.570) 0:00:05.071 **** 2025-09-18 00:58:06.255409 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255420 | orchestrator | 2025-09-18 00:58:06.255431 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-18 00:58:06.255441 | orchestrator | Thursday 18 September 2025 00:56:42 +0000 (0:00:01.202) 0:00:06.273 **** 2025-09-18 00:58:06.255452 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255463 | orchestrator | 2025-09-18 00:58:06.255511 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-18 00:58:06.255523 | orchestrator | Thursday 18 September 2025 00:56:43 +0000 (0:00:00.898) 0:00:07.172 **** 2025-09-18 00:58:06.255534 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255545 | orchestrator | 2025-09-18 00:58:06.255556 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-18 00:58:06.255566 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:01.153) 0:00:08.325 **** 2025-09-18 00:58:06.255577 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255588 | orchestrator | 2025-09-18 00:58:06.255599 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-18 00:58:06.255610 | orchestrator | Thursday 18 September 2025 00:56:45 +0000 (0:00:00.958) 0:00:09.284 **** 2025-09-18 00:58:06.255621 | orchestrator | changed: [testbed-manager] 2025-09-18 00:58:06.255632 | orchestrator | 2025-09-18 00:58:06.255642 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-18 00:58:06.255653 | orchestrator | Thursday 18 September 2025 00:57:40 +0000 (0:00:54.405) 0:01:03.690 **** 2025-09-18 00:58:06.255664 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:58:06.255675 | orchestrator | 2025-09-18 00:58:06.255686 | orchestrator | PLAY [Restart 
ceph manager services] ******************************************* 2025-09-18 00:58:06.255696 | orchestrator | 2025-09-18 00:58:06.255707 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-18 00:58:06.255718 | orchestrator | Thursday 18 September 2025 00:57:40 +0000 (0:00:00.319) 0:01:04.010 **** 2025-09-18 00:58:06.255729 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:58:06.255740 | orchestrator | 2025-09-18 00:58:06.255750 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-18 00:58:06.255761 | orchestrator | 2025-09-18 00:58:06.255772 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-18 00:58:06.255783 | orchestrator | Thursday 18 September 2025 00:57:52 +0000 (0:00:11.647) 0:01:15.658 **** 2025-09-18 00:58:06.255793 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:58:06.255804 | orchestrator | 2025-09-18 00:58:06.255815 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-18 00:58:06.255835 | orchestrator | 2025-09-18 00:58:06.255846 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-18 00:58:06.255857 | orchestrator | Thursday 18 September 2025 00:58:03 +0000 (0:00:11.325) 0:01:26.983 **** 2025-09-18 00:58:06.255868 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:58:06.255879 | orchestrator | 2025-09-18 00:58:06.255898 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:58:06.255910 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 00:58:06.255921 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.255932 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.255943 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 00:58:06.255954 | orchestrator | 2025-09-18 00:58:06.255965 | orchestrator | 2025-09-18 00:58:06.255976 | orchestrator | 2025-09-18 00:58:06.255987 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:58:06.255998 | orchestrator | Thursday 18 September 2025 00:58:04 +0000 (0:00:01.191) 0:01:28.175 **** 2025-09-18 00:58:06.256009 | orchestrator | =============================================================================== 2025-09-18 00:58:06.256020 | orchestrator | Create admin user ------------------------------------------------------ 54.41s 2025-09-18 00:58:06.256031 | orchestrator | Restart ceph manager service ------------------------------------------- 24.17s 2025-09-18 00:58:06.256042 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.57s 2025-09-18 00:58:06.256053 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.46s 2025-09-18 00:58:06.256064 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.20s 2025-09-18 00:58:06.256075 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.15s 2025-09-18 00:58:06.256086 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.96s 2025-09-18 
00:58:06.256097 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-09-18 00:58:06.256108 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.90s 2025-09-18 00:58:06.256119 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s 2025-09-18 00:58:06.256130 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.32s 2025-09-18 00:58:06.256141 | orchestrator | 2025-09-18 00:58:06 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:06.256152 | orchestrator | 2025-09-18 00:58:06 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:06.256163 | orchestrator | 2025-09-18 00:58:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:09.359002 | orchestrator | 2025-09-18 00:58:09 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:09.359090 | orchestrator | 2025-09-18 00:58:09 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:09.359104 | orchestrator | 2025-09-18 00:58:09 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:09.359116 | orchestrator | 2025-09-18 00:58:09 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:09.359128 | orchestrator | 2025-09-18 00:58:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:12.320217 | orchestrator | 2025-09-18 00:58:12 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:12.320425 | orchestrator | 2025-09-18 00:58:12 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:12.320911 | orchestrator | 2025-09-18 00:58:12 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:12.321601 | orchestrator | 2025-09-18 00:58:12 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:12.321625 | orchestrator | 2025-09-18 00:58:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:15.358897 | orchestrator | 2025-09-18 00:58:15 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:15.359569 | orchestrator | 2025-09-18 00:58:15 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:15.362183 | orchestrator | 2025-09-18 00:58:15 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:15.363182 | orchestrator | 2025-09-18 00:58:15 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:15.363217 | orchestrator | 2025-09-18 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:18.398203 | orchestrator | 2025-09-18 00:58:18 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:18.398886 | orchestrator | 2025-09-18 00:58:18 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:18.399873 | orchestrator | 2025-09-18 00:58:18 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:18.401143 | orchestrator | 2025-09-18 00:58:18 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:18.401186 | orchestrator | 2025-09-18 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:58:21.443475 | orchestrator | 2025-09-18 
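The ceph dashboard play above disables the mgr dashboard module, adjusts its settings (ssl, server_port, server_addr, standby behaviour and status code), re-enables the module and creates an admin user from a temporary password file. A rough sketch of the equivalent ceph CLI calls is shown below; the exact flags used by the playbook may differ, and the password file path is a placeholder.

    import subprocess

    # Rough sketch of the ceph CLI calls behind the dashboard bootstrap tasks above.
    commands = [
        ["ceph", "mgr", "module", "disable", "dashboard"],
        ["ceph", "config", "set", "mgr", "mgr/dashboard/ssl", "false"],
        ["ceph", "config", "set", "mgr", "mgr/dashboard/server_port", "7000"],
        ["ceph", "config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0"],
        ["ceph", "config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error"],
        ["ceph", "config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404"],
        ["ceph", "mgr", "module", "enable", "dashboard"],
        # placeholder path for the temporary password file written by the play
        ["ceph", "dashboard", "ac-user-create", "admin", "-i", "/tmp/ceph_dashboard_password", "administrator"],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)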
00:58:21 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:58:21.444732 | orchestrator | 2025-09-18 00:58:21 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:58:21.446693 | orchestrator | 2025-09-18 00:58:21 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:58:21.448250 | orchestrator | 2025-09-18 00:58:21 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:58:21.448275 | orchestrator | 2025-09-18 00:58:21 | INFO  | Wait 1 second(s) until the next check
[... the same four tasks (be432373, 895b9946, 4c89681b, 1f7c2d51) were polled roughly every three seconds from 00:58:24 through 00:59:22 and reported in state STARTED on every check; the repeated status lines are collapsed here ...]
2025-09-18 00:59:25.508288 | orchestrator | 2025-09-18 00:59:25 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state STARTED 2025-09-18 00:59:25.508747 | orchestrator | 2025-09-18 00:59:25 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:25.510101 | orchestrator | 2025-09-18 00:59:25 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:59:25.511463 | orchestrator | 2025-09-18 00:59:25 | INFO  | Task 
1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:25.511926 | orchestrator | 2025-09-18 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:28.556428 | orchestrator | 2025-09-18 00:59:28 | INFO  | Task be432373-8c9e-416e-9e97-c8a0dd28902c is in state SUCCESS 2025-09-18 00:59:28.556625 | orchestrator | 2025-09-18 00:59:28 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:28.559663 | orchestrator | 2025-09-18 00:59:28.559734 | orchestrator | 2025-09-18 00:59:28.559754 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:59:28.559770 | orchestrator | 2025-09-18 00:59:28.559784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:59:28.559798 | orchestrator | Thursday 18 September 2025 00:56:42 +0000 (0:00:00.249) 0:00:00.249 **** 2025-09-18 00:59:28.559812 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:59:28.559828 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:59:28.559843 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:59:28.559858 | orchestrator | 2025-09-18 00:59:28.559873 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:59:28.559887 | orchestrator | Thursday 18 September 2025 00:56:43 +0000 (0:00:00.265) 0:00:00.515 **** 2025-09-18 00:59:28.559901 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-18 00:59:28.559916 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-18 00:59:28.559932 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-18 00:59:28.559947 | orchestrator | 2025-09-18 00:59:28.559960 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-18 00:59:28.559973 | orchestrator | 2025-09-18 00:59:28.559987 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 00:59:28.560002 | orchestrator | Thursday 18 September 2025 00:56:43 +0000 (0:00:00.348) 0:00:00.864 **** 2025-09-18 00:59:28.560015 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:59:28.560030 | orchestrator | 2025-09-18 00:59:28.560045 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-18 00:59:28.560064 | orchestrator | Thursday 18 September 2025 00:56:44 +0000 (0:00:00.741) 0:00:01.605 **** 2025-09-18 00:59:28.560081 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-18 00:59:28.560095 | orchestrator | 2025-09-18 00:59:28.560109 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-18 00:59:28.560122 | orchestrator | Thursday 18 September 2025 00:56:48 +0000 (0:00:04.027) 0:00:05.633 **** 2025-09-18 00:59:28.560136 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-18 00:59:28.560185 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-18 00:59:28.560200 | orchestrator | 2025-09-18 00:59:28.560213 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-18 00:59:28.560226 | orchestrator | Thursday 18 September 2025 00:56:54 +0000 (0:00:05.985) 0:00:11.619 **** 2025-09-18 
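
The service-ks-register output above boils down to standard Keystone registration for Glance: a service of type image, internal and public endpoints on port 9292 behind api-int.testbed.osism.xyz and api.testbed.osism.xyz, a service project, a glance user, and an admin role grant. As a rough sketch only (plain openstack.cloud tasks, not the kolla-ansible service-ks-register role; the credentials, password variable, and region name are assumptions), the same registration could look like this:

# Sketch under assumptions: admin credentials come from clouds.yaml / OS_* environment
# variables, the region is RegionOne, and glance_keystone_password is defined elsewhere.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: glance | Creating services
      openstack.cloud.catalog_service:
        name: glance
        service_type: image
        description: OpenStack Image service

    - name: glance | Creating endpoints
      openstack.cloud.endpoint:
        service: glance
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        region: RegionOne
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9292" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9292" }

    - name: glance | Creating projects
      openstack.cloud.project:
        name: service
        domain: default

    - name: glance | Creating users
      openstack.cloud.identity_user:
        name: glance
        password: "{{ glance_keystone_password }}"
        default_project: service

    - name: glance | Granting user roles
      openstack.cloud.role_assignment:
        user: glance
        role: admin
        project: service
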
00:59:28.560238 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-18 00:59:28.560251 | orchestrator | 2025-09-18 00:59:28.560263 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-18 00:59:28.560276 | orchestrator | Thursday 18 September 2025 00:56:57 +0000 (0:00:02.923) 0:00:14.542 **** 2025-09-18 00:59:28.560289 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 00:59:28.560301 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-18 00:59:28.560315 | orchestrator | 2025-09-18 00:59:28.560328 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-18 00:59:28.560341 | orchestrator | Thursday 18 September 2025 00:57:00 +0000 (0:00:03.632) 0:00:18.174 **** 2025-09-18 00:59:28.560354 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 00:59:28.560367 | orchestrator | 2025-09-18 00:59:28.560381 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-18 00:59:28.560394 | orchestrator | Thursday 18 September 2025 00:57:04 +0000 (0:00:03.214) 0:00:21.389 **** 2025-09-18 00:59:28.560408 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-18 00:59:28.560421 | orchestrator | 2025-09-18 00:59:28.560435 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-18 00:59:28.560448 | orchestrator | Thursday 18 September 2025 00:57:08 +0000 (0:00:04.293) 0:00:25.683 **** 2025-09-18 00:59:28.560575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.560609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.560642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.560656 | orchestrator | 2025-09-18 00:59:28.560667 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 00:59:28.560678 | orchestrator | Thursday 18 September 2025 00:57:14 +0000 (0:00:06.361) 0:00:32.045 
**** 2025-09-18 00:59:28.560689 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:59:28.560701 | orchestrator | 2025-09-18 00:59:28.560722 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-18 00:59:28.560733 | orchestrator | Thursday 18 September 2025 00:57:15 +0000 (0:00:00.549) 0:00:32.594 **** 2025-09-18 00:59:28.560744 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.560756 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:28.560767 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:28.560778 | orchestrator | 2025-09-18 00:59:28.560788 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-18 00:59:28.560800 | orchestrator | Thursday 18 September 2025 00:57:18 +0000 (0:00:03.102) 0:00:35.697 **** 2025-09-18 00:59:28.560820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 00:59:28.560832 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 00:59:28.560843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 00:59:28.560854 | orchestrator | 2025-09-18 00:59:28.560865 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-18 00:59:28.560875 | orchestrator | Thursday 18 September 2025 00:57:19 +0000 (0:00:01.494) 0:00:37.191 **** 2025-09-18 00:59:28.560885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 00:59:28.560897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 00:59:28.560915 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 00:59:28.560927 | orchestrator | 2025-09-18 00:59:28.560938 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-18 00:59:28.560949 | orchestrator | Thursday 18 September 2025 00:57:21 +0000 (0:00:01.196) 0:00:38.387 **** 2025-09-18 00:59:28.560961 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:59:28.560971 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:59:28.560982 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:59:28.560991 | orchestrator | 2025-09-18 00:59:28.561001 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-18 00:59:28.561013 | orchestrator | Thursday 18 September 2025 00:57:21 +0000 (0:00:00.660) 0:00:39.048 **** 2025-09-18 00:59:28.561023 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.561036 | orchestrator | 2025-09-18 00:59:28.561047 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-18 00:59:28.561058 | orchestrator | Thursday 18 September 2025 00:57:22 +0000 (0:00:00.263) 0:00:39.312 **** 2025-09-18 00:59:28.561069 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.561080 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.561092 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.561103 | orchestrator | 2025-09-18 00:59:28.561114 | orchestrator | TASK [glance : 
include_tasks] ************************************************** 2025-09-18 00:59:28.561125 | orchestrator | Thursday 18 September 2025 00:57:22 +0000 (0:00:00.267) 0:00:39.579 **** 2025-09-18 00:59:28.561136 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 00:59:28.561148 | orchestrator | 2025-09-18 00:59:28.561160 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-18 00:59:28.561171 | orchestrator | Thursday 18 September 2025 00:57:22 +0000 (0:00:00.493) 0:00:40.072 **** 2025-09-18 00:59:28.561195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.561226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.561241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.561260 | orchestrator | 2025-09-18 00:59:28.561272 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-18 00:59:28.561282 | orchestrator | Thursday 18 September 2025 00:57:26 +0000 (0:00:03.724) 0:00:43.797 **** 2025-09-18 00:59:28.561302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:59:28.561321 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.561335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:59:28.561347 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.561368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:59:28.561389 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.561400 | orchestrator | 2025-09-18 00:59:28.561412 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-18 00:59:28.561423 | orchestrator | Thursday 18 September 2025 00:57:30 +0000 (0:00:04.197) 0:00:47.994 **** 2025-09-18 00:59:28.561440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:59:28.561454 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.561473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:59:28.561493 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.561510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 00:59:28.561548 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.561709 | orchestrator | 2025-09-18 00:59:28.561726 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-18 00:59:28.561738 | orchestrator | Thursday 18 September 2025 00:57:34 +0000 (0:00:03.813) 0:00:51.808 **** 2025-09-18 00:59:28.561750 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.561761 | orchestrator | skipping: [testbed-node-1] 
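
The backend TLS certificate, TLS key, and TLS backend PEM tasks all come back as skipping on every node, which is the expected result when TLS between HAProxy and the Glance backend is not enabled in this testbed. A minimal sketch of the kind of guard that produces those skips (assumed variable names, not the literal kolla-ansible tasks):

# Sketch only: when the backend TLS flag is false, every item of the copy task
# is skipped for each host, which is what the log above shows.
- name: glance | Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_tls_backend_cert }}"          # assumed source variable
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  with_dict: "{{ glance_services }}"
  when:
    - item.value.enabled | bool
    - glance_enable_tls_backend | bool           # false in this run, hence the skips
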
2025-09-18 00:59:28.561772 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.561784 | orchestrator | 2025-09-18 00:59:28.561795 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-18 00:59:28.561807 | orchestrator | Thursday 18 September 2025 00:57:38 +0000 (0:00:03.811) 0:00:55.619 **** 2025-09-18 00:59:28.561830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.561870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.561885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.561905 | orchestrator | 2025-09-18 00:59:28.561916 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-18 00:59:28.561927 | orchestrator | Thursday 18 September 2025 00:57:42 +0000 (0:00:04.046) 0:00:59.666 **** 2025-09-18 00:59:28.561937 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:28.561948 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.561959 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:28.561971 | orchestrator | 2025-09-18 00:59:28.561981 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-18 00:59:28.561992 | orchestrator | Thursday 18 September 2025 00:57:49 +0000 (0:00:06.764) 0:01:06.430 **** 2025-09-18 00:59:28.562002 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562012 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562088 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.562099 | orchestrator | 2025-09-18 00:59:28.562109 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-18 00:59:28.562131 | orchestrator | Thursday 18 September 2025 00:57:54 +0000 (0:00:05.125) 0:01:11.556 **** 2025-09-18 00:59:28.562141 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562154 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562166 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
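
glance-api.conf is templated and changed on all three nodes right after the Ceph configs and keyrings were copied in, so the API ends up pointing at the RBD backend. Roughly the kind of store settings the generated file carries (illustrative values only; the pool, user, and exact option set are assumptions, and the real file is produced by the kolla templates plus the testbed configuration):

# Illustrative sketch, not the real template output: a minimal [glance_store]/[rbd]
# section of the kind the deployed glance-api.conf typically contains.
- name: Write an example RBD store section (illustration only)
  ansible.builtin.copy:
    dest: /tmp/glance-api-rbd-example.conf       # deliberately not the live config path
    content: |
      [glance_store]
      default_backend = rbd

      [rbd]
      store_description = Ceph RBD backend
      rbd_store_pool = images
      rbd_store_user = glance
      rbd_store_ceph_conf = /etc/ceph/ceph.conf
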
00:59:28.562178 | orchestrator | 2025-09-18 00:59:28.562190 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-18 00:59:28.562201 | orchestrator | Thursday 18 September 2025 00:57:58 +0000 (0:00:04.441) 0:01:15.998 **** 2025-09-18 00:59:28.562211 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562222 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562233 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.562245 | orchestrator | 2025-09-18 00:59:28.562257 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-18 00:59:28.562268 | orchestrator | Thursday 18 September 2025 00:58:04 +0000 (0:00:05.716) 0:01:21.714 **** 2025-09-18 00:59:28.562280 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562293 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.562305 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562317 | orchestrator | 2025-09-18 00:59:28.562331 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-18 00:59:28.562344 | orchestrator | Thursday 18 September 2025 00:58:10 +0000 (0:00:05.656) 0:01:27.371 **** 2025-09-18 00:59:28.562355 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562370 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562382 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.562394 | orchestrator | 2025-09-18 00:59:28.562406 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-18 00:59:28.562419 | orchestrator | Thursday 18 September 2025 00:58:10 +0000 (0:00:00.307) 0:01:27.678 **** 2025-09-18 00:59:28.562442 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-18 00:59:28.562458 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562471 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-18 00:59:28.562496 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.562509 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-18 00:59:28.562559 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562572 | orchestrator | 2025-09-18 00:59:28.562584 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-18 00:59:28.562595 | orchestrator | Thursday 18 September 2025 00:58:13 +0000 (0:00:03.406) 0:01:31.085 **** 2025-09-18 00:59:28.562612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.562644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.562667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 00:59:28.562691 | orchestrator | 2025-09-18 00:59:28.562704 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 00:59:28.562717 | orchestrator | Thursday 18 September 2025 00:58:18 +0000 (0:00:04.218) 0:01:35.303 **** 2025-09-18 00:59:28.562730 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:28.562743 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:28.562754 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:28.562766 | orchestrator | 2025-09-18 00:59:28.562778 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-18 00:59:28.562789 | orchestrator | Thursday 18 September 2025 00:58:18 +0000 (0:00:00.270) 0:01:35.574 **** 2025-09-18 00:59:28.562800 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.562812 | orchestrator | 2025-09-18 00:59:28.562823 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-18 00:59:28.562834 | orchestrator | Thursday 18 September 2025 00:58:20 +0000 (0:00:02.281) 0:01:37.855 **** 2025-09-18 00:59:28.562843 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.562852 | orchestrator | 2025-09-18 00:59:28.562861 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-18 00:59:28.562871 | orchestrator | Thursday 18 September 2025 00:58:22 +0000 (0:00:02.312) 0:01:40.168 **** 2025-09-18 00:59:28.562880 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.562889 | orchestrator | 2025-09-18 00:59:28.562899 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-18 00:59:28.562908 | orchestrator | Thursday 18 September 2025 00:58:25 +0000 (0:00:02.141) 0:01:42.310 **** 2025-09-18 00:59:28.562918 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.562929 | orchestrator | 2025-09-18 00:59:28.562939 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-18 00:59:28.562949 | orchestrator | Thursday 18 September 2025 00:58:53 +0000 (0:00:28.450) 0:02:10.760 **** 2025-09-18 00:59:28.562959 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:28.562970 | orchestrator | 2025-09-18 00:59:28.562991 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-18 00:59:28.563002 | orchestrator | Thursday 18 September 2025 00:58:55 +0000 (0:00:02.257) 0:02:13.018 **** 2025-09-18 00:59:28.563012 | orchestrator | 2025-09-18 00:59:28.563023 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-18 
00:59:28.563044 | orchestrator | Thursday 18 September 2025 00:58:55 +0000 (0:00:00.059) 0:02:13.077 ****
2025-09-18 00:59:28.563055 | orchestrator | 
2025-09-18 00:59:28.563065 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-18 00:59:28.563075 | orchestrator | Thursday 18 September 2025 00:58:55 +0000 (0:00:00.064) 0:02:13.141 ****
2025-09-18 00:59:28.563085 | orchestrator | 
2025-09-18 00:59:28.563096 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-18 00:59:28.563107 | orchestrator | Thursday 18 September 2025 00:58:55 +0000 (0:00:00.067) 0:02:13.208 ****
2025-09-18 00:59:28.563117 | orchestrator | changed: [testbed-node-0]
2025-09-18 00:59:28.563128 | orchestrator | changed: [testbed-node-2]
2025-09-18 00:59:28.563139 | orchestrator | changed: [testbed-node-1]
2025-09-18 00:59:28.563149 | orchestrator | 
2025-09-18 00:59:28.563158 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 00:59:28.563170 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-18 00:59:28.563182 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-18 00:59:28.563199 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-18 00:59:28.563210 | orchestrator | 
2025-09-18 00:59:28.563220 | orchestrator | 
2025-09-18 00:59:28.563229 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 00:59:28.563240 | orchestrator | Thursday 18 September 2025 00:59:27 +0000 (0:00:31.647) 0:02:44.856 ****
2025-09-18 00:59:28.563250 | orchestrator | ===============================================================================
2025-09-18 00:59:28.563259 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.65s
2025-09-18 00:59:28.563269 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.45s
2025-09-18 00:59:28.563279 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.76s
2025-09-18 00:59:28.563289 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.36s
2025-09-18 00:59:28.563299 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.99s
2025-09-18 00:59:28.563309 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.72s
2025-09-18 00:59:28.563319 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.66s
2025-09-18 00:59:28.563330 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.13s
2025-09-18 00:59:28.563341 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.44s
2025-09-18 00:59:28.563350 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.29s
2025-09-18 00:59:28.563359 | orchestrator | glance : Check glance containers ---------------------------------------- 4.22s
2025-09-18 00:59:28.563369 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.20s
2025-09-18 00:59:28.563380 | orchestrator | glance : Copying over config.json files for services -------------------- 4.05s
2025-09-18 00:59:28.563391 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.03s
2025-09-18 00:59:28.563402 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.81s
2025-09-18 00:59:28.563412 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.81s
2025-09-18 00:59:28.563423 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.72s
2025-09-18 00:59:28.563433 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.63s
2025-09-18 00:59:28.563443 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.41s
2025-09-18 00:59:28.563452 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.21s
2025-09-18 00:59:28.563472 | orchestrator | 2025-09-18 00:59:28 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED
2025-09-18 00:59:28.566254 | orchestrator | 2025-09-18 00:59:28 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED
2025-09-18 00:59:28.568028 | orchestrator | 2025-09-18 00:59:28 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:59:31.620892 | orchestrator | 2025-09-18 00:59:31 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 00:59:31.622245 | orchestrator | 2025-09-18 00:59:31 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED
2025-09-18 00:59:31.624074 | orchestrator | 2025-09-18 00:59:31 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED
2025-09-18 00:59:31.625848 | orchestrator | 2025-09-18 00:59:31 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 00:59:31.626167 | orchestrator | 2025-09-18 00:59:31 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:59:34.671571 | orchestrator | 2025-09-18 00:59:34 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 00:59:34.673834 | orchestrator | 2025-09-18 00:59:34 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED
2025-09-18 00:59:34.676188 | orchestrator | 2025-09-18 00:59:34 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED
2025-09-18 00:59:34.677005 | orchestrator | 2025-09-18 00:59:34 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 00:59:34.677031 | orchestrator | 2025-09-18 00:59:34 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:59:37.714914 | orchestrator | 2025-09-18 00:59:37 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 00:59:37.716294 | orchestrator | 2025-09-18 00:59:37 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED
2025-09-18 00:59:37.717057 | orchestrator | 2025-09-18 00:59:37 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED
2025-09-18 00:59:37.718705 | orchestrator | 2025-09-18 00:59:37 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 00:59:37.718734 | orchestrator | 2025-09-18 00:59:37 | INFO  | Wait 1 second(s) until the next check
2025-09-18 00:59:40.755176 | orchestrator | 2025-09-18 00:59:40 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 00:59:40.755785 | orchestrator | 2025-09-18 00:59:40 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED
2025-09-18 00:59:40.756634 | orchestrator | 2025-09-18 00:59:40 | INFO  | 
Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:40.757451 | orchestrator | 2025-09-18 00:59:40 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:40.757474 | orchestrator | 2025-09-18 00:59:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:43.814305 | orchestrator | 2025-09-18 00:59:43 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:43.817485 | orchestrator | 2025-09-18 00:59:43 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state STARTED 2025-09-18 00:59:43.820377 | orchestrator | 2025-09-18 00:59:43 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:43.822567 | orchestrator | 2025-09-18 00:59:43 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:43.822596 | orchestrator | 2025-09-18 00:59:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:46.887168 | orchestrator | 2025-09-18 00:59:46 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 00:59:46.889433 | orchestrator | 2025-09-18 00:59:46 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:46.893478 | orchestrator | 2025-09-18 00:59:46 | INFO  | Task 4c89681b-c9a6-4a9c-8e6a-bd61466a287a is in state SUCCESS 2025-09-18 00:59:46.895494 | orchestrator | 2025-09-18 00:59:46.895569 | orchestrator | 2025-09-18 00:59:46.895583 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 00:59:46.895743 | orchestrator | 2025-09-18 00:59:46.895760 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 00:59:46.895835 | orchestrator | Thursday 18 September 2025 00:56:36 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-18 00:59:46.895849 | orchestrator | ok: [testbed-manager] 2025-09-18 00:59:46.895861 | orchestrator | ok: [testbed-node-0] 2025-09-18 00:59:46.895873 | orchestrator | ok: [testbed-node-1] 2025-09-18 00:59:46.895883 | orchestrator | ok: [testbed-node-2] 2025-09-18 00:59:46.895894 | orchestrator | ok: [testbed-node-3] 2025-09-18 00:59:46.895905 | orchestrator | ok: [testbed-node-4] 2025-09-18 00:59:46.895915 | orchestrator | ok: [testbed-node-5] 2025-09-18 00:59:46.895926 | orchestrator | 2025-09-18 00:59:46.895937 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 00:59:46.895948 | orchestrator | Thursday 18 September 2025 00:56:37 +0000 (0:00:00.952) 0:00:01.212 **** 2025-09-18 00:59:46.895959 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-18 00:59:46.895970 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-18 00:59:46.895981 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-18 00:59:46.895992 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-18 00:59:46.896003 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-18 00:59:46.896014 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-18 00:59:46.896025 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-18 00:59:46.896459 | orchestrator | 2025-09-18 00:59:46.896499 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-18 00:59:46.896511 | orchestrator | 2025-09-18 00:59:46.896522 | 
orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-18 00:59:46.896555 | orchestrator | Thursday 18 September 2025 00:56:38 +0000 (0:00:00.695) 0:00:01.908 **** 2025-09-18 00:59:46.896568 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:59:46.896580 | orchestrator | 2025-09-18 00:59:46.896591 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-18 00:59:46.896602 | orchestrator | Thursday 18 September 2025 00:56:39 +0000 (0:00:01.318) 0:00:03.226 **** 2025-09-18 00:59:46.896617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.896631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.896660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.896688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.896714 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:59:46.896728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.896740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.896752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.896764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.896781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.897150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.897168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.897490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.897510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.897522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.897591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.897604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.897634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.897647 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:59:46.897694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.897709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.897720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.897732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.897743 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.897767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.898445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.898460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.898600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.898619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.898631 | orchestrator | 2025-09-18 00:59:46.898643 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-18 00:59:46.898654 | orchestrator | Thursday 18 September 2025 00:56:43 +0000 (0:00:03.767) 0:00:06.994 **** 2025-09-18 00:59:46.898665 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 00:59:46.898677 | orchestrator | 2025-09-18 00:59:46.898688 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-18 00:59:46.898699 | orchestrator | Thursday 18 September 2025 00:56:45 +0000 (0:00:01.561) 0:00:08.555 **** 2025-09-18 00:59:46.898710 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:59:46.898744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.898926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.898956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.898973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.898984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899126 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:59:46.899139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899213 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-09-18 00:59:46.899272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.899293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.899355 | orchestrator | 2025-09-18 00:59:46.899381 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-18 00:59:46.899399 | orchestrator | Thursday 18 September 2025 00:56:50 +0000 (0:00:05.692) 0:00:14.247 **** 2025-09-18 00:59:46.899409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 00:59:46.899507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899524 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899555 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 00:59:46.899571 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899673 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.899684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899695 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.899707 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.899718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899832 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.899843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.899855 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.899866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899906 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.899917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.899928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.899995 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.900006 | orchestrator | 2025-09-18 00:59:46.900018 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-18 00:59:46.900030 | orchestrator | Thursday 18 September 2025 00:56:52 +0000 (0:00:01.447) 0:00:15.694 **** 2025-09-18 00:59:46.900041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.900051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900097 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.900108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 00:59:46.900152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 
00:59:46.900165 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900175 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 00:59:46.900186 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900196 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.900211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.900221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.900278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900331 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.900341 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.900355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.900372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 00:59:46.900440 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.900450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.900460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900485 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.900495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 00:59:46.900511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 00:59:46.900618 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.900628 | orchestrator | 2025-09-18 00:59:46.900638 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-18 00:59:46.900648 | orchestrator | Thursday 18 September 2025 00:56:54 +0000 (0:00:02.186) 0:00:17.880 **** 2025-09-18 00:59:46.900658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900668 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:59:46.900679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.900772 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.900793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.900803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.900841 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-09-18 00:59:46.900871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.900880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.900897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.900931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900960 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:59:46.900971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.900996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
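In the Prometheus tasks above and below, each host loops over the same service map and only handles the entries mapped to it (the manager carries prometheus-server, alertmanager and the blackbox exporter; the compute nodes carry the libvirt exporter, and so on), which is why the same item dicts reappear once per relevant host. A task then skips an item when its own condition is not met, so the backend-TLS copy above skips everywhere, presumably because backend TLS is not enabled in this testbed, while the config.json copy runs and reports changed. A minimal sketch of that gating, with a trimmed-down service map and illustrative variable and condition names (this is not the literal kolla-ansible role code):

    # Simplified service map; mirrors the item dicts printed in the log above.
    prometheus_services:
      prometheus-node-exporter:
        container_name: prometheus_node_exporter
        group: prometheus-node-exporter
        enabled: true
        image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
        volumes:
          - "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"
          - "kolla_logs:/var/log/kolla/"
          - "/:/host:ro,rslave"

    # Hypothetical task showing the per-service loop and the gating that
    # produces "changed" on hosts in the service's group and "skipping"
    # whenever a condition (enabled flag, group membership) is not met.
    - name: Copying over config.json files
      template:
        src: "{{ item.key }}.json.j2"
        dest: "/etc/kolla/{{ item.key }}/config.json"
        mode: "0660"
      when:
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]
      with_dict: "{{ prometheus_services }}"

Under this pattern a host outside a service's group, or a service whose extra condition is false, shows up as skipping for that item, while the matching hosts report changed once the rendered file differs from what is already on disk.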
2025-09-18 00:59:46.901013 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.901021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.901030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.901038 | orchestrator | 2025-09-18 00:59:46.901046 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-18 00:59:46.901054 | orchestrator | Thursday 18 September 2025 00:57:00 +0000 (0:00:05.949) 0:00:23.829 **** 2025-09-18 00:59:46.901062 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:59:46.901070 | orchestrator | 2025-09-18 00:59:46.901078 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-18 00:59:46.901107 | orchestrator | Thursday 18 September 2025 00:57:01 +0000 (0:00:01.043) 0:00:24.873 **** 2025-09-18 00:59:46.901117 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901126 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901135 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901149 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901161 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901170 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901199 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901209 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901217 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.901225 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901242 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901254 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901262 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090497, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8975687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901292 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 
1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901301 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901310 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901323 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901331 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901344 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901353 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901382 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901400 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901415 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901423 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901435 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901444 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901482 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901492 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090523, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9034824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.901505 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901514 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901540 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901549 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901558 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901589 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901598 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901623 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901631 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901644 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901652 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901661 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901691 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 
'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901707 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901716 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901724 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901737 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090489, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8967705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.901745 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901753 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901783 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901797 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901806 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901814 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901826 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901834 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901843 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901872 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901887 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901896 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901904 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901918 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901927 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901935 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090516, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.901971 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901981 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901989 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.901997 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902010 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902044 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902052 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902090 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902100 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902109 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902117 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902125 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902138 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090485, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902146 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902165 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902174 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902182 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902190 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902198 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902211 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-09-18 00:59:46.902219 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090499, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8978517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902238 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902247 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902255 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902263 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902271 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902283 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902298 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902311 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902320 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902328 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902336 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 
1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902345 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902357 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902370 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.902379 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902391 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090514, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.900419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902399 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902408 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902416 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902424 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902436 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902464 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902473 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902481 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902498 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902511 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.902541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090502, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.898124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902550 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902558 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.902570 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902579 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.902587 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902596 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902604 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.902612 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 00:59:46.902620 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.902637 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090495, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8971508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902646 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090522, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.902797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902654 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090469, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.891533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902666 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090541, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.908594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902675 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090520, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9024394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902683 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090488, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8958292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 00:59:46.902691 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090474, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8952703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 00:59:46.902704 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090511, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8997893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 00:59:46.902716 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090505, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.899096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 00:59:46.902725 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090538, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.9069726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 00:59:46.902733 | orchestrator |
2025-09-18 00:59:46.902741 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-18 00:59:46.902749 | orchestrator | Thursday 18 September 2025 00:57:24 +0000 (0:00:23.171) 0:00:48.044 ****
2025-09-18 00:59:46.902757 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-18 00:59:46.902765 | orchestrator |
2025-09-18 00:59:46.902776 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-18 00:59:46.902785 | orchestrator | Thursday 18 September 2025 00:57:25 +0000 (0:00:00.623) 0:00:48.668 ****
2025-09-18 00:59:46.902793 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.902801 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902810 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.902818 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902826 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.902834 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-18 00:59:46.902842 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.902849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902857 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.902865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902873 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.902881 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.902889 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902897 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.902904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902912 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.902920 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.902933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902941 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.902949 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902956 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.902964 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.902972 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902980 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.902988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.902995 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.903003 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.903011 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.903019 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.903027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.903035 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.903043 | orchestrator | [WARNING]: Skipped
2025-09-18 00:59:46.903050 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.903058 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-18 00:59:46.903066 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-18 00:59:46.903074 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-18 00:59:46.903082 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 00:59:46.903089 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-18 00:59:46.903097 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-18 00:59:46.903105 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-18 00:59:46.903112 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-18 00:59:46.903120 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-18 00:59:46.903128 | orchestrator |
2025-09-18 00:59:46.903140 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-18 00:59:46.903148 | orchestrator | Thursday 18 September 2025 00:57:27 +0000 (0:00:02.254) 0:00:50.922 ****
2025-09-18 00:59:46.903156 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903164 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903172 | orchestrator | skipping: [testbed-node-0]
2025-09-18 00:59:46.903180 | orchestrator | skipping: [testbed-node-1]
2025-09-18 00:59:46.903187 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903195 | orchestrator | skipping: [testbed-node-3]
2025-09-18 00:59:46.903203 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903211 | orchestrator | skipping: [testbed-node-2]
2025-09-18 00:59:46.903219 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903227 | orchestrator | skipping: [testbed-node-4]
2025-09-18 00:59:46.903235 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903242 | orchestrator | skipping: [testbed-node-5]
2025-09-18 00:59:46.903250 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-18 00:59:46.903258 | orchestrator |
2025-09-18 00:59:46.903266 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-18 00:59:46.903274 | orchestrator | Thursday 18 September 2025 00:57:43 +0000 (0:00:16.140) 0:01:07.063 ****
2025-09-18 00:59:46.903287 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903299 | orchestrator | skipping: [testbed-node-0]
2025-09-18 00:59:46.903307 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903314 | orchestrator | skipping: [testbed-node-2]
2025-09-18 00:59:46.903322 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903330 | orchestrator | skipping: [testbed-node-1]
2025-09-18 00:59:46.903338 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903346 | orchestrator | skipping: [testbed-node-5]
2025-09-18 00:59:46.903353 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903361 | orchestrator | skipping: [testbed-node-4]
2025-09-18 00:59:46.903369 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903377 | orchestrator | skipping: [testbed-node-3]
2025-09-18 00:59:46.903385 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-18 00:59:46.903392 | orchestrator |
2025-09-18 00:59:46.903400 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-18 00:59:46.903408 | orchestrator | Thursday 18 September 2025 00:57:47 +0000 (0:00:04.127) 0:01:11.191 ****
2025-09-18 00:59:46.903416 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-18 00:59:46.903424 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-18 00:59:46.903432 | orchestrator | skipping: [testbed-node-0]
2025-09-18 00:59:46.903440 | orchestrator | skipping:
[testbed-node-1] 2025-09-18 00:59:46.903448 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 00:59:46.903455 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.903463 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 00:59:46.903471 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.903479 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 00:59:46.903486 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.903494 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 00:59:46.903502 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.903510 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-18 00:59:46.903517 | orchestrator | 2025-09-18 00:59:46.903564 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-18 00:59:46.903573 | orchestrator | Thursday 18 September 2025 00:57:50 +0000 (0:00:02.215) 0:01:13.406 **** 2025-09-18 00:59:46.903581 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:59:46.903589 | orchestrator | 2025-09-18 00:59:46.903597 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-18 00:59:46.903605 | orchestrator | Thursday 18 September 2025 00:57:51 +0000 (0:00:00.954) 0:01:14.360 **** 2025-09-18 00:59:46.903613 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.903621 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.903633 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.903641 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.903649 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.903663 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.903670 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.903678 | orchestrator | 2025-09-18 00:59:46.903686 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-18 00:59:46.903694 | orchestrator | Thursday 18 September 2025 00:57:52 +0000 (0:00:01.148) 0:01:15.508 **** 2025-09-18 00:59:46.903702 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.903710 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.903718 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.903725 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.903733 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:46.903741 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:46.903748 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:46.903756 | orchestrator | 2025-09-18 00:59:46.903764 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-18 00:59:46.903772 | orchestrator | Thursday 18 September 2025 00:57:55 +0000 (0:00:02.869) 0:01:18.378 **** 2025-09-18 00:59:46.903780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903788 | 
orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903796 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.903803 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903811 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.903819 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.903826 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903834 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.903846 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903853 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.903860 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903867 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.903874 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 00:59:46.903880 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.903887 | orchestrator | 2025-09-18 00:59:46.903894 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-18 00:59:46.903900 | orchestrator | Thursday 18 September 2025 00:57:57 +0000 (0:00:02.771) 0:01:21.150 **** 2025-09-18 00:59:46.903907 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 00:59:46.903914 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.903921 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 00:59:46.903927 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 00:59:46.903934 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.903941 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.903948 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 00:59:46.903954 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.903961 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-18 00:59:46.903968 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 00:59:46.903975 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.903982 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 00:59:46.903993 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.904000 | orchestrator | 2025-09-18 00:59:46.904007 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-18 00:59:46.904013 | orchestrator | Thursday 18 September 2025 00:57:59 +0000 (0:00:02.087) 0:01:23.237 **** 2025-09-18 00:59:46.904020 | orchestrator | [WARNING]: Skipped 2025-09-18 00:59:46.904027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-18 00:59:46.904034 | 
orchestrator | due to this access issue: 2025-09-18 00:59:46.904040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-18 00:59:46.904047 | orchestrator | not a directory 2025-09-18 00:59:46.904054 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 00:59:46.904061 | orchestrator | 2025-09-18 00:59:46.904067 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-18 00:59:46.904074 | orchestrator | Thursday 18 September 2025 00:58:02 +0000 (0:00:02.478) 0:01:25.716 **** 2025-09-18 00:59:46.904081 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.904088 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.904094 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.904101 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.904108 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.904114 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.904121 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.904128 | orchestrator | 2025-09-18 00:59:46.904134 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-18 00:59:46.904141 | orchestrator | Thursday 18 September 2025 00:58:03 +0000 (0:00:01.396) 0:01:27.112 **** 2025-09-18 00:59:46.904148 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.904158 | orchestrator | skipping: [testbed-node-0] 2025-09-18 00:59:46.904165 | orchestrator | skipping: [testbed-node-1] 2025-09-18 00:59:46.904171 | orchestrator | skipping: [testbed-node-2] 2025-09-18 00:59:46.904178 | orchestrator | skipping: [testbed-node-3] 2025-09-18 00:59:46.904185 | orchestrator | skipping: [testbed-node-4] 2025-09-18 00:59:46.904191 | orchestrator | skipping: [testbed-node-5] 2025-09-18 00:59:46.904198 | orchestrator | 2025-09-18 00:59:46.904204 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-18 00:59:46.904211 | orchestrator | Thursday 18 September 2025 00:58:04 +0000 (0:00:00.759) 0:01:27.872 **** 2025-09-18 00:59:46.904218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904229 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 00:59:46.904237 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904273 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 00:59:46.904323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904351 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904368 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 00:59:46.904381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 00:59:46.904448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904455 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 00:59:46.904468 | orchestrator | 2025-09-18 00:59:46.904475 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-18 00:59:46.904482 | orchestrator | Thursday 18 September 2025 00:58:10 +0000 (0:00:05.964) 0:01:33.837 **** 2025-09-18 00:59:46.904488 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-18 00:59:46.904495 | orchestrator | skipping: [testbed-manager] 2025-09-18 00:59:46.904502 | orchestrator | 2025-09-18 00:59:46.904509 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904515 | orchestrator | Thursday 18 September 2025 00:58:11 +0000 (0:00:01.233) 0:01:35.070 **** 2025-09-18 00:59:46.904522 | orchestrator | 2025-09-18 00:59:46.904540 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904547 | orchestrator | Thursday 18 September 2025 00:58:11 +0000 (0:00:00.055) 0:01:35.126 **** 2025-09-18 00:59:46.904554 | orchestrator | 2025-09-18 00:59:46.904560 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904567 | orchestrator | Thursday 18 September 2025 00:58:11 +0000 (0:00:00.054) 0:01:35.180 **** 2025-09-18 00:59:46.904574 | orchestrator | 2025-09-18 00:59:46.904580 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904587 | orchestrator | Thursday 18 September 2025 00:58:11 +0000 (0:00:00.052) 0:01:35.233 **** 2025-09-18 00:59:46.904593 | orchestrator | 2025-09-18 00:59:46.904600 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904611 | orchestrator | Thursday 18 September 2025 00:58:12 +0000 (0:00:00.300) 0:01:35.533 **** 2025-09-18 00:59:46.904618 | orchestrator | 2025-09-18 00:59:46.904625 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904631 | orchestrator | Thursday 18 September 2025 00:58:12 +0000 (0:00:00.156) 0:01:35.690 **** 2025-09-18 00:59:46.904638 | orchestrator | 2025-09-18 00:59:46.904644 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 00:59:46.904651 | orchestrator | Thursday 18 September 2025 00:58:12 +0000 (0:00:00.156) 0:01:35.846 **** 2025-09-18 00:59:46.904658 | orchestrator | 2025-09-18 
00:59:46.904664 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-18 00:59:46.904671 | orchestrator | Thursday 18 September 2025 00:58:12 +0000 (0:00:00.139) 0:01:35.985 **** 2025-09-18 00:59:46.904677 | orchestrator | changed: [testbed-manager] 2025-09-18 00:59:46.904684 | orchestrator | 2025-09-18 00:59:46.904691 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-18 00:59:46.904700 | orchestrator | Thursday 18 September 2025 00:58:31 +0000 (0:00:18.847) 0:01:54.833 **** 2025-09-18 00:59:46.904707 | orchestrator | changed: [testbed-manager] 2025-09-18 00:59:46.904714 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:59:46.904721 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:46.904727 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:59:46.904764 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:59:46.904772 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:46.904779 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:46.904785 | orchestrator | 2025-09-18 00:59:46.904792 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-18 00:59:46.904798 | orchestrator | Thursday 18 September 2025 00:58:39 +0000 (0:00:08.283) 0:02:03.116 **** 2025-09-18 00:59:46.904805 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:46.904812 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:46.904818 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:46.904825 | orchestrator | 2025-09-18 00:59:46.904832 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-18 00:59:46.904838 | orchestrator | Thursday 18 September 2025 00:58:49 +0000 (0:00:09.665) 0:02:12.782 **** 2025-09-18 00:59:46.904845 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:46.904851 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:46.904858 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:46.904864 | orchestrator | 2025-09-18 00:59:46.904871 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-18 00:59:46.904878 | orchestrator | Thursday 18 September 2025 00:58:59 +0000 (0:00:09.892) 0:02:22.674 **** 2025-09-18 00:59:46.904884 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:46.904891 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:59:46.904897 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:46.904904 | orchestrator | changed: [testbed-node-0] 2025-09-18 00:59:46.904910 | orchestrator | changed: [testbed-manager] 2025-09-18 00:59:46.904917 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:59:46.904923 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:59:46.904930 | orchestrator | 2025-09-18 00:59:46.904936 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-18 00:59:46.904943 | orchestrator | Thursday 18 September 2025 00:59:14 +0000 (0:00:14.814) 0:02:37.488 **** 2025-09-18 00:59:46.904950 | orchestrator | changed: [testbed-manager] 2025-09-18 00:59:46.904956 | orchestrator | 2025-09-18 00:59:46.904963 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-18 00:59:46.904970 | orchestrator | Thursday 18 September 2025 00:59:20 +0000 (0:00:06.742) 0:02:44.230 **** 2025-09-18 00:59:46.904976 | orchestrator | changed: 
[testbed-node-0] 2025-09-18 00:59:46.904983 | orchestrator | changed: [testbed-node-2] 2025-09-18 00:59:46.904989 | orchestrator | changed: [testbed-node-1] 2025-09-18 00:59:46.905000 | orchestrator | 2025-09-18 00:59:46.905007 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-18 00:59:46.905014 | orchestrator | Thursday 18 September 2025 00:59:32 +0000 (0:00:11.540) 0:02:55.771 **** 2025-09-18 00:59:46.905020 | orchestrator | changed: [testbed-manager] 2025-09-18 00:59:46.905027 | orchestrator | 2025-09-18 00:59:46.905034 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-18 00:59:46.905040 | orchestrator | Thursday 18 September 2025 00:59:38 +0000 (0:00:06.529) 0:03:02.301 **** 2025-09-18 00:59:46.905047 | orchestrator | changed: [testbed-node-3] 2025-09-18 00:59:46.905053 | orchestrator | changed: [testbed-node-4] 2025-09-18 00:59:46.905060 | orchestrator | changed: [testbed-node-5] 2025-09-18 00:59:46.905067 | orchestrator | 2025-09-18 00:59:46.905073 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 00:59:46.905080 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-18 00:59:46.905087 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 00:59:46.905097 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 00:59:46.905104 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 00:59:46.905110 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-18 00:59:46.905117 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-18 00:59:46.905123 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-18 00:59:46.905130 | orchestrator | 2025-09-18 00:59:46.905136 | orchestrator | 2025-09-18 00:59:46.905143 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 00:59:46.905150 | orchestrator | Thursday 18 September 2025 00:59:45 +0000 (0:00:06.299) 0:03:08.601 **** 2025-09-18 00:59:46.905156 | orchestrator | =============================================================================== 2025-09-18 00:59:46.905163 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.17s 2025-09-18 00:59:46.905169 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.85s 2025-09-18 00:59:46.905176 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.14s 2025-09-18 00:59:46.905182 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.81s 2025-09-18 00:59:46.905189 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.54s 2025-09-18 00:59:46.905199 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.89s 2025-09-18 00:59:46.905206 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.67s 2025-09-18 00:59:46.905212 | orchestrator | prometheus : Restart prometheus-node-exporter container 
----------------- 8.28s 2025-09-18 00:59:46.905219 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.74s 2025-09-18 00:59:46.905225 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.53s 2025-09-18 00:59:46.905232 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.30s 2025-09-18 00:59:46.905238 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.96s 2025-09-18 00:59:46.905245 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.95s 2025-09-18 00:59:46.905251 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.69s 2025-09-18 00:59:46.905262 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.13s 2025-09-18 00:59:46.905269 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.77s 2025-09-18 00:59:46.905276 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.87s 2025-09-18 00:59:46.905282 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.77s 2025-09-18 00:59:46.905289 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 2.48s 2025-09-18 00:59:46.905295 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.25s 2025-09-18 00:59:46.905302 | orchestrator | 2025-09-18 00:59:46 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:46.905309 | orchestrator | 2025-09-18 00:59:46 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:46.905315 | orchestrator | 2025-09-18 00:59:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:49.956096 | orchestrator | 2025-09-18 00:59:49 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 00:59:49.957964 | orchestrator | 2025-09-18 00:59:49 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:49.959735 | orchestrator | 2025-09-18 00:59:49 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:49.961197 | orchestrator | 2025-09-18 00:59:49 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:49.961398 | orchestrator | 2025-09-18 00:59:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:53.017865 | orchestrator | 2025-09-18 00:59:53 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 00:59:53.018331 | orchestrator | 2025-09-18 00:59:53 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:53.021987 | orchestrator | 2025-09-18 00:59:53 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:53.026584 | orchestrator | 2025-09-18 00:59:53 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:53.027769 | orchestrator | 2025-09-18 00:59:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:56.096130 | orchestrator | 2025-09-18 00:59:56 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 00:59:56.096264 | orchestrator | 2025-09-18 00:59:56 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:56.096958 | orchestrator | 2025-09-18 00:59:56 | INFO  | 
Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:56.098780 | orchestrator | 2025-09-18 00:59:56 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:56.099014 | orchestrator | 2025-09-18 00:59:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 00:59:59.138930 | orchestrator | 2025-09-18 00:59:59 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 00:59:59.141344 | orchestrator | 2025-09-18 00:59:59 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 00:59:59.143014 | orchestrator | 2025-09-18 00:59:59 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 00:59:59.144376 | orchestrator | 2025-09-18 00:59:59 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 00:59:59.144479 | orchestrator | 2025-09-18 00:59:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:02.191697 | orchestrator | 2025-09-18 01:00:02 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:02.197412 | orchestrator | 2025-09-18 01:00:02 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:02.201041 | orchestrator | 2025-09-18 01:00:02 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:02.203029 | orchestrator | 2025-09-18 01:00:02 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:02.203058 | orchestrator | 2025-09-18 01:00:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:05.250951 | orchestrator | 2025-09-18 01:00:05 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:05.252457 | orchestrator | 2025-09-18 01:00:05 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:05.253977 | orchestrator | 2025-09-18 01:00:05 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:05.255409 | orchestrator | 2025-09-18 01:00:05 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:05.255435 | orchestrator | 2025-09-18 01:00:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:08.301657 | orchestrator | 2025-09-18 01:00:08 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:08.303165 | orchestrator | 2025-09-18 01:00:08 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:08.305024 | orchestrator | 2025-09-18 01:00:08 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:08.307734 | orchestrator | 2025-09-18 01:00:08 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:08.307813 | orchestrator | 2025-09-18 01:00:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:11.346824 | orchestrator | 2025-09-18 01:00:11 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:11.348161 | orchestrator | 2025-09-18 01:00:11 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:11.351338 | orchestrator | 2025-09-18 01:00:11 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:11.353882 | orchestrator | 2025-09-18 01:00:11 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:11.354382 | orchestrator | 2025-09-18 01:00:11 | INFO  | 
Wait 1 second(s) until the next check 2025-09-18 01:00:14.379460 | orchestrator | 2025-09-18 01:00:14 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:14.380200 | orchestrator | 2025-09-18 01:00:14 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:14.381180 | orchestrator | 2025-09-18 01:00:14 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:14.382178 | orchestrator | 2025-09-18 01:00:14 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:14.382343 | orchestrator | 2025-09-18 01:00:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:17.424497 | orchestrator | 2025-09-18 01:00:17 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:17.425968 | orchestrator | 2025-09-18 01:00:17 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:17.426744 | orchestrator | 2025-09-18 01:00:17 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:17.428345 | orchestrator | 2025-09-18 01:00:17 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:17.428404 | orchestrator | 2025-09-18 01:00:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:20.461219 | orchestrator | 2025-09-18 01:00:20 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:20.462369 | orchestrator | 2025-09-18 01:00:20 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:20.464303 | orchestrator | 2025-09-18 01:00:20 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:20.465408 | orchestrator | 2025-09-18 01:00:20 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:20.465581 | orchestrator | 2025-09-18 01:00:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:23.490258 | orchestrator | 2025-09-18 01:00:23 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:23.491451 | orchestrator | 2025-09-18 01:00:23 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:23.491480 | orchestrator | 2025-09-18 01:00:23 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:23.491983 | orchestrator | 2025-09-18 01:00:23 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:23.492085 | orchestrator | 2025-09-18 01:00:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:26.530682 | orchestrator | 2025-09-18 01:00:26 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:26.531116 | orchestrator | 2025-09-18 01:00:26 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:26.532032 | orchestrator | 2025-09-18 01:00:26 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:26.532788 | orchestrator | 2025-09-18 01:00:26 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:26.532814 | orchestrator | 2025-09-18 01:00:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:29.572944 | orchestrator | 2025-09-18 01:00:29 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:29.575513 | orchestrator | 2025-09-18 01:00:29 | INFO  | Task 
895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:29.577790 | orchestrator | 2025-09-18 01:00:29 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state STARTED 2025-09-18 01:00:29.579621 | orchestrator | 2025-09-18 01:00:29 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:29.579674 | orchestrator | 2025-09-18 01:00:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:32.624800 | orchestrator | 2025-09-18 01:00:32 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:32.625839 | orchestrator | 2025-09-18 01:00:32 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:32.630369 | orchestrator | 2025-09-18 01:00:32 | INFO  | Task 1f7c2d51-f1b0-4d1d-b162-75d4b59fb809 is in state SUCCESS 2025-09-18 01:00:32.631812 | orchestrator | 2025-09-18 01:00:32.631846 | orchestrator | 2025-09-18 01:00:32.631859 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:00:32.631871 | orchestrator | 2025-09-18 01:00:32.631882 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:00:32.631893 | orchestrator | Thursday 18 September 2025 00:56:53 +0000 (0:00:00.273) 0:00:00.273 **** 2025-09-18 01:00:32.631905 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:00:32.631916 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:00:32.631927 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:00:32.632412 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:00:32.632432 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:00:32.632442 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:00:32.632453 | orchestrator | 2025-09-18 01:00:32.632464 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:00:32.632475 | orchestrator | Thursday 18 September 2025 00:56:54 +0000 (0:00:00.672) 0:00:00.946 **** 2025-09-18 01:00:32.632486 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-18 01:00:32.632498 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-18 01:00:32.632508 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-18 01:00:32.632519 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-18 01:00:32.632530 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-18 01:00:32.632559 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-18 01:00:32.632571 | orchestrator | 2025-09-18 01:00:32.632595 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-18 01:00:32.632884 | orchestrator | 2025-09-18 01:00:32.632898 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 01:00:32.632909 | orchestrator | Thursday 18 September 2025 00:56:54 +0000 (0:00:00.574) 0:00:01.520 **** 2025-09-18 01:00:32.632921 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:00:32.632933 | orchestrator | 2025-09-18 01:00:32.632944 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-18 01:00:32.632955 | orchestrator | Thursday 18 September 2025 00:56:56 +0000 (0:00:01.994) 0:00:03.514 **** 2025-09-18 01:00:32.632966 | 
orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-18 01:00:32.632977 | orchestrator | 2025-09-18 01:00:32.632988 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-18 01:00:32.632999 | orchestrator | Thursday 18 September 2025 00:56:59 +0000 (0:00:02.953) 0:00:06.468 **** 2025-09-18 01:00:32.633010 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-18 01:00:32.633021 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-18 01:00:32.633032 | orchestrator | 2025-09-18 01:00:32.633043 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-18 01:00:32.633054 | orchestrator | Thursday 18 September 2025 00:57:06 +0000 (0:00:06.709) 0:00:13.178 **** 2025-09-18 01:00:32.633065 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:00:32.633076 | orchestrator | 2025-09-18 01:00:32.633087 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-18 01:00:32.633098 | orchestrator | Thursday 18 September 2025 00:57:10 +0000 (0:00:03.591) 0:00:16.769 **** 2025-09-18 01:00:32.633109 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:00:32.633120 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-18 01:00:32.633131 | orchestrator | 2025-09-18 01:00:32.633142 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-18 01:00:32.633153 | orchestrator | Thursday 18 September 2025 00:57:14 +0000 (0:00:04.326) 0:00:21.096 **** 2025-09-18 01:00:32.633164 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:00:32.633175 | orchestrator | 2025-09-18 01:00:32.633186 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-18 01:00:32.633197 | orchestrator | Thursday 18 September 2025 00:57:18 +0000 (0:00:03.647) 0:00:24.744 **** 2025-09-18 01:00:32.633208 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-18 01:00:32.633219 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-18 01:00:32.633230 | orchestrator | 2025-09-18 01:00:32.633241 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-18 01:00:32.633265 | orchestrator | Thursday 18 September 2025 00:57:26 +0000 (0:00:08.576) 0:00:33.320 **** 2025-09-18 01:00:32.633279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.633331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.633352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.633366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.633525 | orchestrator | 2025-09-18 01:00:32.633586 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 01:00:32.633601 | orchestrator | Thursday 18 September 2025 00:57:29 +0000 (0:00:03.149) 0:00:36.469 **** 2025-09-18 01:00:32.633615 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.633628 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.633641 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.633653 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.633667 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.633680 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.633693 | orchestrator | 2025-09-18 01:00:32.633709 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 01:00:32.633729 | orchestrator | Thursday 18 September 2025 00:57:30 +0000 (0:00:00.783) 0:00:37.253 **** 2025-09-18 01:00:32.633742 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.633755 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.633772 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.633787 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:00:32.633801 | orchestrator | 2025-09-18 01:00:32.633813 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-18 01:00:32.633944 | orchestrator | Thursday 18 September 2025 00:57:31 +0000 (0:00:00.815) 0:00:38.069 **** 2025-09-18 01:00:32.633963 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-18 01:00:32.633975 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-18 01:00:32.633985 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-18 01:00:32.633996 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-18 01:00:32.634007 | 
orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-18 01:00:32.634061 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-18 01:00:32.634076 | orchestrator | 2025-09-18 01:00:32.634087 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-18 01:00:32.634097 | orchestrator | Thursday 18 September 2025 00:57:33 +0000 (0:00:02.237) 0:00:40.306 **** 2025-09-18 01:00:32.634110 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 01:00:32.634133 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 01:00:32.634145 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 01:00:32.634195 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 01:00:32.634214 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 01:00:32.634226 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 01:00:32.634245 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 01:00:32.634256 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 01:00:32.634297 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 01:00:32.634315 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 01:00:32.634327 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 01:00:32.634345 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 01:00:32.634357 | orchestrator 
|
2025-09-18 01:00:32.634369 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-09-18 01:00:32.634380 | orchestrator | Thursday 18 September 2025 00:57:37 +0000 (0:00:03.740) 0:00:44.047 ****
2025-09-18 01:00:32.634391 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-18 01:00:32.634402 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-18 01:00:32.634413 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-18 01:00:32.634424 | orchestrator |
2025-09-18 01:00:32.634435 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-09-18 01:00:32.634446 | orchestrator | Thursday 18 September 2025 00:57:39 +0000 (0:00:02.066) 0:00:46.113 ****
2025-09-18 01:00:32.634457 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-09-18 01:00:32.634468 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-09-18 01:00:32.634478 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-09-18 01:00:32.634489 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-09-18 01:00:32.634500 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-09-18 01:00:32.634608 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-09-18 01:00:32.634626 | orchestrator |
2025-09-18 01:00:32.634640 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-09-18 01:00:32.634653 | orchestrator | Thursday 18 September 2025 00:57:42 +0000 (0:00:02.978) 0:00:49.092 ****
2025-09-18 01:00:32.634666 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-09-18 01:00:32.634679 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-09-18 01:00:32.634692 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-09-18 01:00:32.634704 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-09-18 01:00:32.634718 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-09-18 01:00:32.634730 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-09-18 01:00:32.634742 | orchestrator |
2025-09-18 01:00:32.634753 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-09-18 01:00:32.634764 | orchestrator | Thursday 18 September 2025 00:57:43 +0000 (0:00:01.156) 0:00:50.248 ****
2025-09-18 01:00:32.634774 | orchestrator | skipping: [testbed-node-0]
2025-09-18 01:00:32.634791 | orchestrator |
2025-09-18 01:00:32.634801 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-09-18 01:00:32.634810 | orchestrator | Thursday 18 September 2025 00:57:43 +0000 (0:00:00.154) 0:00:50.402 ****
2025-09-18 01:00:32.634820 | orchestrator | skipping: [testbed-node-0]
2025-09-18 01:00:32.634829 | orchestrator | skipping: [testbed-node-1]
2025-09-18 01:00:32.634848 | orchestrator | skipping: [testbed-node-2]
2025-09-18 01:00:32.634858 | orchestrator | skipping: [testbed-node-3]
2025-09-18 01:00:32.634868 | orchestrator | skipping: [testbed-node-4]
2025-09-18 01:00:32.634877 | orchestrator | skipping: [testbed-node-5]
2025-09-18 01:00:32.634887 | orchestrator |
2025-09-18 01:00:32.634896
| orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 01:00:32.634906 | orchestrator | Thursday 18 September 2025 00:57:44 +0000 (0:00:01.084) 0:00:51.487 **** 2025-09-18 01:00:32.634917 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:00:32.634927 | orchestrator | 2025-09-18 01:00:32.634936 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-18 01:00:32.634946 | orchestrator | Thursday 18 September 2025 00:57:46 +0000 (0:00:01.456) 0:00:52.944 **** 2025-09-18 01:00:32.634957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.634967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.635010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.635031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635166 | orchestrator | 2025-09-18 01:00:32.635176 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-18 01:00:32.635186 | orchestrator | Thursday 18 September 2025 00:57:49 +0000 (0:00:03.107) 0:00:56.052 **** 2025-09-18 01:00:32.635196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.635210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635227 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.635238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.635252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635262 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.635272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.635283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635293 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.635303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635336 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.635350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635370 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.635381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635406 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.635416 | orchestrator | 2025-09-18 01:00:32.635426 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-18 01:00:32.635436 | orchestrator | Thursday 18 September 2025 00:57:50 +0000 (0:00:01.546) 0:00:57.598 **** 2025-09-18 01:00:32.635451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.635466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635476 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.635487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.635497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635507 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.635517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635579 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.635594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.635604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635614 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.635624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635650 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.635665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.635689 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.635699 | orchestrator | 2025-09-18 01:00:32.635709 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-18 01:00:32.635719 | orchestrator | Thursday 18 September 2025 00:57:53 +0000 (0:00:02.160) 0:00:59.758 **** 2025-09-18 01:00:32.635729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.635740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.635758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.635774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635800 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.635886 | orchestrator | 2025-09-18 01:00:32.635896 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-18 01:00:32.635905 | orchestrator | Thursday 18 September 2025 00:57:56 +0000 (0:00:03.235) 0:01:02.994 **** 2025-09-18 01:00:32.635915 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-18 01:00:32.635930 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.635940 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-18 01:00:32.635950 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.635960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-18 01:00:32.635969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-18 01:00:32.635979 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-18 01:00:32.635989 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.635999 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-18 01:00:32.636008 | orchestrator | 2025-09-18 01:00:32.636018 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-18 01:00:32.636028 | orchestrator | Thursday 18 September 2025 00:57:58 +0000 (0:00:02.285) 0:01:05.280 **** 2025-09-18 01:00:32.636038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.636053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.636068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.636110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636192 | orchestrator | 2025-09-18 01:00:32.636201 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-18 01:00:32.636211 | orchestrator | Thursday 18 September 2025 00:58:09 +0000 (0:00:10.920) 0:01:16.200 **** 2025-09-18 01:00:32.636226 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.636236 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.636246 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.636255 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:00:32.636265 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:00:32.636274 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:00:32.636284 | orchestrator | 2025-09-18 01:00:32.636293 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-18 01:00:32.636303 | orchestrator | Thursday 18 September 2025 00:58:11 +0000 (0:00:01.981) 0:01:18.182 **** 2025-09-18 01:00:32.636317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.636334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.636355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 01:00:32.636380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636389 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.636399 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.636409 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.636423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636448 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.636458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636479 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.636494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 01:00:32.636524 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.636547 | orchestrator | 2025-09-18 01:00:32.636557 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-18 01:00:32.636567 | orchestrator | Thursday 18 September 2025 00:58:13 +0000 (0:00:01.588) 0:01:19.771 **** 2025-09-18 01:00:32.636577 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.636586 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.636596 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.636605 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.636615 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.636625 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.636634 | orchestrator | 2025-09-18 01:00:32.636644 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-18 01:00:32.636654 | orchestrator | Thursday 18 September 2025 00:58:13 +0000 (0:00:00.568) 0:01:20.339 **** 2025-09-18 01:00:32.636664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.636674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.636690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 01:00:32.636716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 01:00:32.636826 | orchestrator | 2025-09-18 01:00:32.636835 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 01:00:32.636845 | orchestrator | Thursday 18 September 2025 00:58:16 +0000 (0:00:02.864) 0:01:23.204 **** 2025-09-18 01:00:32.636855 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:00:32.636865 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:00:32.636874 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:00:32.636884 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:00:32.636893 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:00:32.636903 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:00:32.636912 | orchestrator | 2025-09-18 01:00:32.636922 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-18 01:00:32.636931 | orchestrator | Thursday 18 September 2025 00:58:17 +0000 (0:00:00.581) 0:01:23.786 **** 2025-09-18 01:00:32.636941 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:00:32.636951 | orchestrator | 2025-09-18 01:00:32.636960 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-18 01:00:32.636970 | orchestrator | Thursday 18 September 2025 00:58:19 +0000 (0:00:02.425) 0:01:26.211 **** 2025-09-18 01:00:32.636986 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:00:32.636995 | orchestrator | 2025-09-18 01:00:32.637005 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-18 01:00:32.637014 | orchestrator | Thursday 18 September 2025 00:58:21 +0000 (0:00:02.251) 0:01:28.462 **** 2025-09-18 
01:00:32.637024 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:00:32.637033 | orchestrator | 2025-09-18 01:00:32.637043 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 01:00:32.637053 | orchestrator | Thursday 18 September 2025 00:58:42 +0000 (0:00:20.936) 0:01:49.399 **** 2025-09-18 01:00:32.637062 | orchestrator | 2025-09-18 01:00:32.637076 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 01:00:32.637086 | orchestrator | Thursday 18 September 2025 00:58:42 +0000 (0:00:00.083) 0:01:49.482 **** 2025-09-18 01:00:32.637095 | orchestrator | 2025-09-18 01:00:32.637105 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 01:00:32.637115 | orchestrator | Thursday 18 September 2025 00:58:42 +0000 (0:00:00.063) 0:01:49.546 **** 2025-09-18 01:00:32.637124 | orchestrator | 2025-09-18 01:00:32.637134 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 01:00:32.637144 | orchestrator | Thursday 18 September 2025 00:58:42 +0000 (0:00:00.069) 0:01:49.615 **** 2025-09-18 01:00:32.637153 | orchestrator | 2025-09-18 01:00:32.637163 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 01:00:32.637172 | orchestrator | Thursday 18 September 2025 00:58:43 +0000 (0:00:00.064) 0:01:49.680 **** 2025-09-18 01:00:32.637182 | orchestrator | 2025-09-18 01:00:32.637192 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 01:00:32.637201 | orchestrator | Thursday 18 September 2025 00:58:43 +0000 (0:00:00.064) 0:01:49.744 **** 2025-09-18 01:00:32.637211 | orchestrator | 2025-09-18 01:00:32.637220 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-18 01:00:32.637230 | orchestrator | Thursday 18 September 2025 00:58:43 +0000 (0:00:00.065) 0:01:49.809 **** 2025-09-18 01:00:32.637243 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:00:32.637253 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:00:32.637263 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:00:32.637273 | orchestrator | 2025-09-18 01:00:32.637282 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-18 01:00:32.637292 | orchestrator | Thursday 18 September 2025 00:59:08 +0000 (0:00:25.741) 0:02:15.551 **** 2025-09-18 01:00:32.637302 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:00:32.637311 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:00:32.637321 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:00:32.637330 | orchestrator | 2025-09-18 01:00:32.637340 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-18 01:00:32.637350 | orchestrator | Thursday 18 September 2025 00:59:19 +0000 (0:00:10.654) 0:02:26.206 **** 2025-09-18 01:00:32.637359 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:00:32.637369 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:00:32.637379 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:00:32.637388 | orchestrator | 2025-09-18 01:00:32.637398 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-18 01:00:32.637407 | orchestrator | Thursday 18 September 2025 01:00:19 +0000 (0:01:00.159) 0:03:26.365 **** 
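The per-item dictionaries echoed by the tasks earlier in this play are kolla-ansible service definitions for the cinder role: each one names the container, image, bind mounts and a healthcheck block, and the restart handlers above and below recreate the containers from exactly these definitions. Below is a minimal Python sketch of one such definition and of how its healthcheck fields could map onto Docker health options. The dictionary values are abridged from the cinder-volume item printed in this log; the helper function and the flag mapping are illustrative assumptions, not kolla-ansible code.

# Sketch (assumption): shape of one kolla-ansible service definition as echoed above.
# Values abridged from the cinder-volume item in this log; the helper is hypothetical.
cinder_volume = {
    "container_name": "cinder_volume",
    "group": "cinder-volume",
    "enabled": True,
    "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
    "privileged": True,
    "ipc_mode": "host",
    "volumes": [
        "/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/dev/:/dev/",
        "/run:/run:shared",
        "cinder:/var/lib/cinder",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
        "timeout": "30",
    },
}

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Hypothetical helper: translate the healthcheck dict into docker-run style flags."""
    return [
        "--health-cmd", hc["test"][1],
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

if __name__ == "__main__":
    print(healthcheck_to_docker_args(cinder_volume["healthcheck"]))

The healthcheck commands visible in the log follow the same pattern: API containers probe their HTTP endpoint (healthcheck_curl http://192.168.16.1x:8776), while scheduler, volume and backup containers only verify their AMQP connection (healthcheck_port <service> 5672).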
2025-09-18 01:00:32.637417 | orchestrator | changed: [testbed-node-4]
2025-09-18 01:00:32.637426 | orchestrator | changed: [testbed-node-3]
2025-09-18 01:00:32.637436 | orchestrator | changed: [testbed-node-5]
2025-09-18 01:00:32.637446 | orchestrator |
2025-09-18 01:00:32.637456 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-09-18 01:00:32.637465 | orchestrator | Thursday 18 September 2025 01:00:30 +0000 (0:00:10.358) 0:03:36.724 ****
2025-09-18 01:00:32.637475 | orchestrator | skipping: [testbed-node-0]
2025-09-18 01:00:32.637485 | orchestrator |
2025-09-18 01:00:32.637494 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 01:00:32.637510 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-18 01:00:32.637520 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-18 01:00:32.637530 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-18 01:00:32.637553 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-18 01:00:32.637564 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-18 01:00:32.637573 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-18 01:00:32.637583 | orchestrator |
2025-09-18 01:00:32.637593 | orchestrator |
2025-09-18 01:00:32.637602 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 01:00:32.637612 | orchestrator | Thursday 18 September 2025 01:00:31 +0000 (0:00:01.291) 0:03:38.015 ****
2025-09-18 01:00:32.637622 | orchestrator | ===============================================================================
2025-09-18 01:00:32.637631 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 60.16s
2025-09-18 01:00:32.637641 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.74s
2025-09-18 01:00:32.637651 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.94s
2025-09-18 01:00:32.637660 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.92s
2025-09-18 01:00:32.637670 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.65s
2025-09-18 01:00:32.637680 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.36s
2025-09-18 01:00:32.637689 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.58s
2025-09-18 01:00:32.637699 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.71s
2025-09-18 01:00:32.637713 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.33s
2025-09-18 01:00:32.637723 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.74s
2025-09-18 01:00:32.637732 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.65s
2025-09-18 01:00:32.637742 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.59s
2025-09-18 01:00:32.637751 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.24s
2025-09-18 01:00:32.637761 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.15s
2025-09-18 01:00:32.637770 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.11s
2025-09-18 01:00:32.637780 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.98s
2025-09-18 01:00:32.637789 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.95s
2025-09-18 01:00:32.637802 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.87s
2025-09-18 01:00:32.637812 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.43s
2025-09-18 01:00:32.637822 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.29s
2025-09-18 01:00:32.637835 | orchestrator | 2025-09-18 01:00:32 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 01:00:32.637845 | orchestrator | 2025-09-18 01:00:32 | INFO  | Wait 1 second(s) until the next check
2025-09-18 01:00:35.659409 | orchestrator | 2025-09-18 01:00:35 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED
2025-09-18 01:00:35.659639 | orchestrator | 2025-09-18 01:00:35 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED
2025-09-18 01:00:35.660267 | orchestrator | 2025-09-18 01:00:35 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 01:00:35.660922 | orchestrator | 2025-09-18 01:00:35 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 01:00:35.660968 | orchestrator | 2025-09-18 01:00:35 | INFO  | Wait 1 second(s) until the next check
2025-09-18 01:00:38.693173 | orchestrator | 2025-09-18 01:00:38 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED
2025-09-18 01:00:38.693705 | orchestrator | 2025-09-18 01:00:38 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED
2025-09-18 01:00:38.694660 | orchestrator | 2025-09-18 01:00:38 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 01:00:38.695233 | orchestrator | 2025-09-18 01:00:38 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 01:00:38.695254 | orchestrator | 2025-09-18 01:00:38 | INFO  | Wait 1 second(s) until the next check
2025-09-18 01:00:41.716754 | orchestrator | 2025-09-18 01:00:41 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED
2025-09-18 01:00:41.717207 | orchestrator | 2025-09-18 01:00:41 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED
2025-09-18 01:00:41.717922 | orchestrator | 2025-09-18 01:00:41 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
2025-09-18 01:00:41.718676 | orchestrator | 2025-09-18 01:00:41 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED
2025-09-18 01:00:41.718703 | orchestrator | 2025-09-18 01:00:41 | INFO  | Wait 1 second(s) until the next check
2025-09-18 01:00:44.740829 | orchestrator | 2025-09-18 01:00:44 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED
2025-09-18 01:00:44.741927 | orchestrator | 2025-09-18 01:00:44 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED
2025-09-18 01:00:44.743521 | orchestrator | 2025-09-18 01:00:44 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED
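The surrounding INFO lines come from the OSISM wrapper that queued each kolla-ansible play as a background task and now polls the returned task IDs roughly once per second until they leave the STARTED state; once a task reaches SUCCESS (as a75cedc0-… does a little further down), its captured play output is printed. A minimal Python sketch of that polling pattern follows; get_task_state is a hypothetical accessor standing in for whatever the tool actually queries, not an OSISM API.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll task IDs until none is still running.

    Sketch only: mirrors the "is in state ... / Wait 1 second(s)" pattern in
    this log; get_task_state is a hypothetical accessor.
    """
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    # Toy accessor: every task reports SUCCESS on its second poll.
    polls = {}
    def fake_state(task_id):
        polls[task_id] = polls.get(task_id, 0) + 1
        return "SUCCESS" if polls[task_id] > 1 else "STARTED"
    wait_for_tasks(["d2d1895d", "a75cedc0"], fake_state)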
2025-09-18 01:00:44.744570 | orchestrator | 2025-09-18 01:00:44 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:44.744601 | orchestrator | 2025-09-18 01:00:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:47.784144 | orchestrator | 2025-09-18 01:00:47 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:00:47.784745 | orchestrator | 2025-09-18 01:00:47 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:47.789987 | orchestrator | 2025-09-18 01:00:47 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:47.790502 | orchestrator | 2025-09-18 01:00:47 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:47.790525 | orchestrator | 2025-09-18 01:00:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:50.817335 | orchestrator | 2025-09-18 01:00:50 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:00:50.817736 | orchestrator | 2025-09-18 01:00:50 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:50.819083 | orchestrator | 2025-09-18 01:00:50 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:50.819701 | orchestrator | 2025-09-18 01:00:50 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:50.819821 | orchestrator | 2025-09-18 01:00:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:53.840116 | orchestrator | 2025-09-18 01:00:53 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:00:53.840303 | orchestrator | 2025-09-18 01:00:53 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:53.840986 | orchestrator | 2025-09-18 01:00:53 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:53.842443 | orchestrator | 2025-09-18 01:00:53 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:53.842562 | orchestrator | 2025-09-18 01:00:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:56.867456 | orchestrator | 2025-09-18 01:00:56 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:00:56.867953 | orchestrator | 2025-09-18 01:00:56 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:56.868380 | orchestrator | 2025-09-18 01:00:56 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:56.870373 | orchestrator | 2025-09-18 01:00:56 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:56.870415 | orchestrator | 2025-09-18 01:00:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:00:59.895479 | orchestrator | 2025-09-18 01:00:59 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:00:59.896030 | orchestrator | 2025-09-18 01:00:59 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:00:59.896522 | orchestrator | 2025-09-18 01:00:59 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:00:59.897300 | orchestrator | 2025-09-18 01:00:59 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:00:59.897328 | orchestrator | 2025-09-18 01:00:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:02.923509 
| orchestrator | 2025-09-18 01:01:02 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:02.923743 | orchestrator | 2025-09-18 01:01:02 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:02.924528 | orchestrator | 2025-09-18 01:01:02 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:02.925007 | orchestrator | 2025-09-18 01:01:02 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:02.925037 | orchestrator | 2025-09-18 01:01:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:05.947679 | orchestrator | 2025-09-18 01:01:05 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:05.948059 | orchestrator | 2025-09-18 01:01:05 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:05.948612 | orchestrator | 2025-09-18 01:01:05 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:05.949853 | orchestrator | 2025-09-18 01:01:05 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:05.949876 | orchestrator | 2025-09-18 01:01:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:08.980020 | orchestrator | 2025-09-18 01:01:08 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:08.980704 | orchestrator | 2025-09-18 01:01:08 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:08.981355 | orchestrator | 2025-09-18 01:01:08 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:08.982058 | orchestrator | 2025-09-18 01:01:08 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:08.982089 | orchestrator | 2025-09-18 01:01:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:12.004957 | orchestrator | 2025-09-18 01:01:12 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:12.006310 | orchestrator | 2025-09-18 01:01:12 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:12.006781 | orchestrator | 2025-09-18 01:01:12 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:12.007392 | orchestrator | 2025-09-18 01:01:12 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:12.007414 | orchestrator | 2025-09-18 01:01:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:15.027588 | orchestrator | 2025-09-18 01:01:15 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:15.028066 | orchestrator | 2025-09-18 01:01:15 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:15.030159 | orchestrator | 2025-09-18 01:01:15 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:15.030715 | orchestrator | 2025-09-18 01:01:15 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:15.030955 | orchestrator | 2025-09-18 01:01:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:18.063883 | orchestrator | 2025-09-18 01:01:18 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:18.064081 | orchestrator | 2025-09-18 01:01:18 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:18.064689 | 
orchestrator | 2025-09-18 01:01:18 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:18.065402 | orchestrator | 2025-09-18 01:01:18 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:18.065586 | orchestrator | 2025-09-18 01:01:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:21.093779 | orchestrator | 2025-09-18 01:01:21 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:21.094144 | orchestrator | 2025-09-18 01:01:21 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:21.094774 | orchestrator | 2025-09-18 01:01:21 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:21.095697 | orchestrator | 2025-09-18 01:01:21 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:21.095739 | orchestrator | 2025-09-18 01:01:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:24.118224 | orchestrator | 2025-09-18 01:01:24 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:24.118831 | orchestrator | 2025-09-18 01:01:24 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:24.120176 | orchestrator | 2025-09-18 01:01:24 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:24.121115 | orchestrator | 2025-09-18 01:01:24 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:24.121138 | orchestrator | 2025-09-18 01:01:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:27.146149 | orchestrator | 2025-09-18 01:01:27 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:27.146755 | orchestrator | 2025-09-18 01:01:27 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:27.147523 | orchestrator | 2025-09-18 01:01:27 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:27.148313 | orchestrator | 2025-09-18 01:01:27 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:27.148328 | orchestrator | 2025-09-18 01:01:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:30.176214 | orchestrator | 2025-09-18 01:01:30 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:30.176626 | orchestrator | 2025-09-18 01:01:30 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:30.177047 | orchestrator | 2025-09-18 01:01:30 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:30.177617 | orchestrator | 2025-09-18 01:01:30 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:30.177729 | orchestrator | 2025-09-18 01:01:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:33.199339 | orchestrator | 2025-09-18 01:01:33 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:33.199521 | orchestrator | 2025-09-18 01:01:33 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:33.200551 | orchestrator | 2025-09-18 01:01:33 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:33.201191 | orchestrator | 2025-09-18 01:01:33 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:33.201214 | 
orchestrator | 2025-09-18 01:01:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:36.226810 | orchestrator | 2025-09-18 01:01:36 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:36.229166 | orchestrator | 2025-09-18 01:01:36 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:36.231956 | orchestrator | 2025-09-18 01:01:36 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:36.232802 | orchestrator | 2025-09-18 01:01:36 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:36.232833 | orchestrator | 2025-09-18 01:01:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:39.258305 | orchestrator | 2025-09-18 01:01:39 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:39.258820 | orchestrator | 2025-09-18 01:01:39 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:39.259414 | orchestrator | 2025-09-18 01:01:39 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:39.261147 | orchestrator | 2025-09-18 01:01:39 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:39.261991 | orchestrator | 2025-09-18 01:01:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:42.289094 | orchestrator | 2025-09-18 01:01:42 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:42.290276 | orchestrator | 2025-09-18 01:01:42 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state STARTED 2025-09-18 01:01:42.292168 | orchestrator | 2025-09-18 01:01:42 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:42.292929 | orchestrator | 2025-09-18 01:01:42 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:42.292995 | orchestrator | 2025-09-18 01:01:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:45.321401 | orchestrator | 2025-09-18 01:01:45 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:45.322565 | orchestrator | 2025-09-18 01:01:45 | INFO  | Task a75cedc0-4a5b-46c9-9c60-fb9c3d0bafab is in state SUCCESS 2025-09-18 01:01:45.322730 | orchestrator | 2025-09-18 01:01:45.323934 | orchestrator | 2025-09-18 01:01:45.323963 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:01:45.323976 | orchestrator | 2025-09-18 01:01:45.323988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:01:45.324000 | orchestrator | Thursday 18 September 2025 00:59:49 +0000 (0:00:00.263) 0:00:00.263 **** 2025-09-18 01:01:45.324012 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:01:45.324025 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:01:45.324037 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:01:45.324049 | orchestrator | 2025-09-18 01:01:45.324061 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:01:45.324073 | orchestrator | Thursday 18 September 2025 00:59:49 +0000 (0:00:00.279) 0:00:00.543 **** 2025-09-18 01:01:45.324085 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-18 01:01:45.324096 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-18 01:01:45.324108 | orchestrator | ok: 
[testbed-node-2] => (item=enable_barbican_True) 2025-09-18 01:01:45.324120 | orchestrator | 2025-09-18 01:01:45.324132 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-18 01:01:45.324143 | orchestrator | 2025-09-18 01:01:45.324155 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-18 01:01:45.324167 | orchestrator | Thursday 18 September 2025 00:59:50 +0000 (0:00:00.404) 0:00:00.947 **** 2025-09-18 01:01:45.324179 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:01:45.324191 | orchestrator | 2025-09-18 01:01:45.324615 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-18 01:01:45.324630 | orchestrator | Thursday 18 September 2025 00:59:50 +0000 (0:00:00.527) 0:00:01.474 **** 2025-09-18 01:01:45.324642 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-18 01:01:45.324653 | orchestrator | 2025-09-18 01:01:45.324664 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-18 01:01:45.324676 | orchestrator | Thursday 18 September 2025 00:59:53 +0000 (0:00:02.940) 0:00:04.415 **** 2025-09-18 01:01:45.324687 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-18 01:01:45.324698 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-18 01:01:45.324710 | orchestrator | 2025-09-18 01:01:45.324721 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-18 01:01:45.324732 | orchestrator | Thursday 18 September 2025 00:59:59 +0000 (0:00:05.937) 0:00:10.352 **** 2025-09-18 01:01:45.324743 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:01:45.324754 | orchestrator | 2025-09-18 01:01:45.324765 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-18 01:01:45.324776 | orchestrator | Thursday 18 September 2025 01:00:03 +0000 (0:00:03.753) 0:00:14.106 **** 2025-09-18 01:01:45.324787 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:01:45.324798 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-18 01:01:45.324809 | orchestrator | 2025-09-18 01:01:45.324820 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-18 01:01:45.324831 | orchestrator | Thursday 18 September 2025 01:00:07 +0000 (0:00:04.012) 0:00:18.119 **** 2025-09-18 01:01:45.324842 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:01:45.324875 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-18 01:01:45.324887 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-18 01:01:45.324898 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-18 01:01:45.324909 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-18 01:01:45.324920 | orchestrator | 2025-09-18 01:01:45.324931 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-18 01:01:45.324954 | orchestrator | Thursday 18 September 2025 01:00:23 +0000 (0:00:15.927) 0:00:34.047 **** 2025-09-18 01:01:45.324965 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> service -> admin) 2025-09-18 01:01:45.324977 | orchestrator | 2025-09-18 01:01:45.324987 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-18 01:01:45.324998 | orchestrator | Thursday 18 September 2025 01:00:27 +0000 (0:00:03.594) 0:00:37.641 **** 2025-09-18 01:01:45.325013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.325039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.325123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.325136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325310 | orchestrator | 2025-09-18 01:01:45.325322 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-18 01:01:45.325333 | orchestrator | Thursday 18 September 2025 01:00:28 +0000 (0:00:01.535) 0:00:39.177 **** 2025-09-18 01:01:45.325345 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-18 01:01:45.325356 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-18 01:01:45.325367 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-18 01:01:45.325385 | orchestrator | 2025-09-18 01:01:45.325397 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-18 01:01:45.325408 | orchestrator | Thursday 18 September 2025 01:00:29 +0000 (0:00:01.228) 0:00:40.405 **** 2025-09-18 01:01:45.325418 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:01:45.325430 | orchestrator | 2025-09-18 01:01:45.325441 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-18 01:01:45.325451 | orchestrator | Thursday 18 September 2025 01:00:29 +0000 (0:00:00.104) 0:00:40.510 **** 2025-09-18 01:01:45.325462 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:01:45.325473 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:01:45.325484 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:01:45.325495 | orchestrator | 2025-09-18 01:01:45.325528 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-18 01:01:45.325540 | orchestrator | Thursday 18 September 2025 01:00:30 +0000 (0:00:00.863) 0:00:41.373 **** 2025-09-18 01:01:45.325551 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:01:45.325562 | orchestrator | 2025-09-18 01:01:45.325573 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-18 01:01:45.325584 | orchestrator | Thursday 18 September 2025 01:00:31 +0000 (0:00:00.941) 0:00:42.315 **** 2025-09-18 01:01:45.325601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.325621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.325634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.325653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.325741 | orchestrator | 2025-09-18 01:01:45.325752 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-18 01:01:45.325764 | orchestrator | Thursday 18 September 2025 01:00:35 +0000 (0:00:03.343) 0:00:45.659 **** 2025-09-18 01:01:45.325775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.325787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.325803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.325815 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:01:45.325833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.325845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.325866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.325877 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:01:45.325892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.325910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.325924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.325936 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:01:45.325949 | orchestrator | 2025-09-18 01:01:45.325962 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-18 01:01:45.325976 | orchestrator | Thursday 18 September 2025 01:00:37 +0000 (0:00:02.626) 0:00:48.286 **** 2025-09-18 01:01:45.325996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.326063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326093 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:01:45.326111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.326123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326146 | orchestrator | skipping: 
[testbed-node-1] 2025-09-18 01:01:45.326166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.326184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326207 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:01:45.326218 | orchestrator | 2025-09-18 01:01:45.326229 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-18 01:01:45.326240 | orchestrator | Thursday 18 September 2025 01:00:38 +0000 (0:00:01.018) 0:00:49.304 **** 2025-09-18 01:01:45.326257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.326274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.326292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.326304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326389 | orchestrator | 2025-09-18 01:01:45.326400 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-18 01:01:45.326411 | orchestrator | Thursday 18 September 2025 01:00:42 +0000 (0:00:03.441) 0:00:52.746 **** 2025-09-18 01:01:45.326422 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.326433 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:01:45.326444 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:01:45.326455 | orchestrator | 2025-09-18 01:01:45.326466 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-18 01:01:45.326477 | orchestrator | Thursday 18 September 2025 01:00:44 +0000 (0:00:02.721) 0:00:55.467 **** 2025-09-18 01:01:45.326489 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 01:01:45.326517 | orchestrator | 2025-09-18 01:01:45.326529 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-18 01:01:45.326540 | orchestrator | Thursday 18 September 2025 01:00:46 +0000 (0:00:01.324) 0:00:56.792 **** 2025-09-18 01:01:45.326551 | orchestrator | skipping: [testbed-node-0] 2025-09-18 
01:01:45.326562 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:01:45.326573 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:01:45.326584 | orchestrator | 2025-09-18 01:01:45.326595 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-18 01:01:45.326606 | orchestrator | Thursday 18 September 2025 01:00:46 +0000 (0:00:00.724) 0:00:57.516 **** 2025-09-18 01:01:45.326617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.326634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.326659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.326671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.326750 | orchestrator | 2025-09-18 01:01:45.326762 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-18 01:01:45.326773 | orchestrator | Thursday 18 September 2025 01:00:55 +0000 (0:00:08.539) 0:01:06.055 **** 2025-09-18 01:01:45.326791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.326803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326826 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:01:45.326841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.326859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326887 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:01:45.326899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 01:01:45.326910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:01:45.326933 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:01:45.326950 | orchestrator | 2025-09-18 01:01:45.326961 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-18 01:01:45.326972 | orchestrator | Thursday 18 September 2025 01:00:56 +0000 (0:00:01.292) 0:01:07.348 **** 2025-09-18 01:01:45.326988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.327007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.327019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 01:01:45.327030 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.327042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.327067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.327079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.327097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.327109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:01:45.327120 | orchestrator | 2025-09-18 01:01:45.327132 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-18 01:01:45.327143 | orchestrator | Thursday 18 September 2025 01:00:59 +0000 (0:00:03.217) 0:01:10.566 **** 2025-09-18 01:01:45.327154 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:01:45.327165 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:01:45.327177 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:01:45.327187 | orchestrator | 2025-09-18 01:01:45.327198 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-18 01:01:45.327209 | orchestrator | Thursday 18 September 2025 01:01:00 +0000 (0:00:00.370) 0:01:10.936 **** 2025-09-18 01:01:45.327220 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.327231 | orchestrator | 2025-09-18 01:01:45.327242 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-18 01:01:45.327253 | orchestrator | Thursday 18 September 2025 01:01:02 +0000 (0:00:02.281) 0:01:13.218 **** 2025-09-18 01:01:45.327264 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.327275 | orchestrator | 2025-09-18 01:01:45.327286 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-18 01:01:45.327303 | orchestrator | Thursday 18 September 2025 01:01:05 +0000 (0:00:02.715) 0:01:15.933 **** 2025-09-18 01:01:45.327314 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.327325 | orchestrator | 2025-09-18 01:01:45.327336 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-18 01:01:45.327347 | orchestrator | Thursday 18 September 2025 01:01:17 +0000 (0:00:12.009) 0:01:27.943 **** 2025-09-18 01:01:45.327357 | orchestrator | 2025-09-18 01:01:45.327368 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-18 01:01:45.327379 | orchestrator | Thursday 18 September 2025 01:01:17 +0000 (0:00:00.156) 0:01:28.100 **** 2025-09-18 01:01:45.327390 | orchestrator | 2025-09-18 01:01:45.327401 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-18 01:01:45.327412 | orchestrator | Thursday 18 September 2025 01:01:17 +0000 (0:00:00.134) 0:01:28.234 **** 2025-09-18 01:01:45.327423 | orchestrator | 2025-09-18 01:01:45.327433 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-18 01:01:45.327444 | orchestrator | Thursday 18 September 2025 01:01:17 +0000 (0:00:00.077) 0:01:28.312 **** 2025-09-18 01:01:45.327455 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.327466 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:01:45.327477 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:01:45.327487 | orchestrator | 2025-09-18 01:01:45.327498 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-18 01:01:45.327529 | orchestrator | Thursday 18 September 2025 01:01:25 +0000 (0:00:08.008) 0:01:36.320 **** 
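Editorial aside: every container definition in this play carries the same healthcheck shape — an interval, a retry count, a start period, a per-probe timeout, and a CMD-SHELL test such as `healthcheck_curl http://192.168.16.10:9311` or `healthcheck_port barbican-worker 5672` (both are helper scripts shipped inside the kolla images). The sketch below only illustrates what evaluating such a spec amounts to; the `run_probe`/`container_health` helpers are assumptions for illustration, not the kolla or Docker implementation.

```python
import subprocess
import time

# Same shape as the healthcheck dicts in the container definitions above.
HEALTHCHECK = {
    "interval": "30",      # seconds between probes
    "retries": "3",        # consecutive failures before reporting unhealthy
    "start_period": "5",   # grace period after the container starts
    "timeout": "30",       # per-probe timeout
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
}

def run_probe(spec: dict) -> bool:
    """Run the CMD-SHELL test once; exit code 0 means healthy."""
    cmd = spec["test"][1]
    try:
        return subprocess.run(cmd, shell=True, timeout=int(spec["timeout"])).returncode == 0
    except subprocess.TimeoutExpired:
        return False

def container_health(spec: dict) -> str:
    """Probe until one success or `retries` consecutive failures."""
    time.sleep(int(spec["start_period"]))
    for _ in range(int(spec["retries"])):
        if run_probe(spec):
            return "healthy"
        time.sleep(int(spec["interval"]))
    return "unhealthy"

if __name__ == "__main__":
    print(container_health(HEALTHCHECK))
```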
2025-09-18 01:01:45.327540 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.327551 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:01:45.327562 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:01:45.327573 | orchestrator | 2025-09-18 01:01:45.327584 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-18 01:01:45.327595 | orchestrator | Thursday 18 September 2025 01:01:31 +0000 (0:00:05.723) 0:01:42.044 **** 2025-09-18 01:01:45.327606 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:01:45.327617 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:01:45.327627 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:01:45.327638 | orchestrator | 2025-09-18 01:01:45.327649 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:01:45.327661 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 01:01:45.327672 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 01:01:45.327683 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 01:01:45.327694 | orchestrator | 2025-09-18 01:01:45.327705 | orchestrator | 2025-09-18 01:01:45.327716 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:01:45.327727 | orchestrator | Thursday 18 September 2025 01:01:43 +0000 (0:00:11.629) 0:01:53.673 **** 2025-09-18 01:01:45.327738 | orchestrator | =============================================================================== 2025-09-18 01:01:45.327749 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.93s 2025-09-18 01:01:45.327765 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.01s 2025-09-18 01:01:45.327777 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.63s 2025-09-18 01:01:45.327787 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.54s 2025-09-18 01:01:45.327798 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.01s 2025-09-18 01:01:45.327809 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.94s 2025-09-18 01:01:45.327820 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.73s 2025-09-18 01:01:45.327837 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.01s 2025-09-18 01:01:45.327848 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.75s 2025-09-18 01:01:45.327859 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.59s 2025-09-18 01:01:45.327870 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.44s 2025-09-18 01:01:45.327880 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.34s 2025-09-18 01:01:45.327891 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.22s 2025-09-18 01:01:45.327902 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 2.94s 2025-09-18 01:01:45.327913 | orchestrator | barbican : Copying over barbican-api.ini 
-------------------------------- 2.72s 2025-09-18 01:01:45.327924 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.72s 2025-09-18 01:01:45.327935 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.63s 2025-09-18 01:01:45.327946 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.28s 2025-09-18 01:01:45.327957 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.54s 2025-09-18 01:01:45.327968 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.32s 2025-09-18 01:01:45.327978 | orchestrator | 2025-09-18 01:01:45 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:45.327990 | orchestrator | 2025-09-18 01:01:45 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:45.328001 | orchestrator | 2025-09-18 01:01:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:48.349559 | orchestrator | 2025-09-18 01:01:48 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:48.350062 | orchestrator | 2025-09-18 01:01:48 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:48.350700 | orchestrator | 2025-09-18 01:01:48 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:01:48.351300 | orchestrator | 2025-09-18 01:01:48 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:48.351571 | orchestrator | 2025-09-18 01:01:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:51.371808 | orchestrator | 2025-09-18 01:01:51 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:51.372867 | orchestrator | 2025-09-18 01:01:51 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:51.373681 | orchestrator | 2025-09-18 01:01:51 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:01:51.374385 | orchestrator | 2025-09-18 01:01:51 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:51.374584 | orchestrator | 2025-09-18 01:01:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:54.402756 | orchestrator | 2025-09-18 01:01:54 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:54.403004 | orchestrator | 2025-09-18 01:01:54 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:54.403746 | orchestrator | 2025-09-18 01:01:54 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:01:54.404446 | orchestrator | 2025-09-18 01:01:54 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:01:54.404477 | orchestrator | 2025-09-18 01:01:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:01:57.431526 | orchestrator | 2025-09-18 01:01:57 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:01:57.432052 | orchestrator | 2025-09-18 01:01:57 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:01:57.432641 | orchestrator | 2025-09-18 01:01:57 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:01:57.433523 | orchestrator | 2025-09-18 01:01:57 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state 
STARTED 2025-09-18 01:01:57.434637 | orchestrator | 2025-09-18 01:01:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:00.456161 | orchestrator | 2025-09-18 01:02:00 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:00.456252 | orchestrator | 2025-09-18 01:02:00 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:00.457292 | orchestrator | 2025-09-18 01:02:00 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:00.458805 | orchestrator | 2025-09-18 01:02:00 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:00.458830 | orchestrator | 2025-09-18 01:02:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:03.501145 | orchestrator | 2025-09-18 01:02:03 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:03.504714 | orchestrator | 2025-09-18 01:02:03 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:03.505512 | orchestrator | 2025-09-18 01:02:03 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:03.507244 | orchestrator | 2025-09-18 01:02:03 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:03.507344 | orchestrator | 2025-09-18 01:02:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:06.550695 | orchestrator | 2025-09-18 01:02:06 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:06.552728 | orchestrator | 2025-09-18 01:02:06 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:06.554842 | orchestrator | 2025-09-18 01:02:06 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:06.558231 | orchestrator | 2025-09-18 01:02:06 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:06.558257 | orchestrator | 2025-09-18 01:02:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:09.607938 | orchestrator | 2025-09-18 01:02:09 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:09.612269 | orchestrator | 2025-09-18 01:02:09 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:09.616365 | orchestrator | 2025-09-18 01:02:09 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:09.617351 | orchestrator | 2025-09-18 01:02:09 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:09.617789 | orchestrator | 2025-09-18 01:02:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:12.656352 | orchestrator | 2025-09-18 01:02:12 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:12.657873 | orchestrator | 2025-09-18 01:02:12 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:12.660073 | orchestrator | 2025-09-18 01:02:12 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:12.661671 | orchestrator | 2025-09-18 01:02:12 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:12.661942 | orchestrator | 2025-09-18 01:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:15.720132 | orchestrator | 2025-09-18 01:02:15 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 
01:02:15.720855 | orchestrator | 2025-09-18 01:02:15 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:15.724250 | orchestrator | 2025-09-18 01:02:15 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:15.727192 | orchestrator | 2025-09-18 01:02:15 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:15.728037 | orchestrator | 2025-09-18 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:18.767318 | orchestrator | 2025-09-18 01:02:18 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:18.769802 | orchestrator | 2025-09-18 01:02:18 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:18.770832 | orchestrator | 2025-09-18 01:02:18 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:18.772187 | orchestrator | 2025-09-18 01:02:18 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:18.772219 | orchestrator | 2025-09-18 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:21.819420 | orchestrator | 2025-09-18 01:02:21 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:21.820225 | orchestrator | 2025-09-18 01:02:21 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:21.822837 | orchestrator | 2025-09-18 01:02:21 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:21.825963 | orchestrator | 2025-09-18 01:02:21 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:21.826122 | orchestrator | 2025-09-18 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:24.858913 | orchestrator | 2025-09-18 01:02:24 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:24.859453 | orchestrator | 2025-09-18 01:02:24 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:24.860756 | orchestrator | 2025-09-18 01:02:24 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:24.861887 | orchestrator | 2025-09-18 01:02:24 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:24.861985 | orchestrator | 2025-09-18 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:27.904994 | orchestrator | 2025-09-18 01:02:27 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:27.905207 | orchestrator | 2025-09-18 01:02:27 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:27.907030 | orchestrator | 2025-09-18 01:02:27 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:27.909631 | orchestrator | 2025-09-18 01:02:27 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:27.909675 | orchestrator | 2025-09-18 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:30.949171 | orchestrator | 2025-09-18 01:02:30 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:30.949290 | orchestrator | 2025-09-18 01:02:30 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:30.950189 | orchestrator | 2025-09-18 01:02:30 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 
01:02:30.950983 | orchestrator | 2025-09-18 01:02:30 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:30.951378 | orchestrator | 2025-09-18 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:33.993264 | orchestrator | 2025-09-18 01:02:33 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:33.993356 | orchestrator | 2025-09-18 01:02:33 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:33.993370 | orchestrator | 2025-09-18 01:02:33 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:33.994603 | orchestrator | 2025-09-18 01:02:33 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:33.994627 | orchestrator | 2025-09-18 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:37.036573 | orchestrator | 2025-09-18 01:02:37 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:37.037125 | orchestrator | 2025-09-18 01:02:37 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:37.037761 | orchestrator | 2025-09-18 01:02:37 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state STARTED 2025-09-18 01:02:37.038645 | orchestrator | 2025-09-18 01:02:37 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:37.039115 | orchestrator | 2025-09-18 01:02:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:40.070507 | orchestrator | 2025-09-18 01:02:40 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:40.070730 | orchestrator | 2025-09-18 01:02:40 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:40.071337 | orchestrator | 2025-09-18 01:02:40 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:40.071760 | orchestrator | 2025-09-18 01:02:40 | INFO  | Task 5d861bf0-f2dc-40e6-ab93-38c3d2b52df0 is in state SUCCESS 2025-09-18 01:02:40.073294 | orchestrator | 2025-09-18 01:02:40 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:40.073319 | orchestrator | 2025-09-18 01:02:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:43.095268 | orchestrator | 2025-09-18 01:02:43 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:43.095746 | orchestrator | 2025-09-18 01:02:43 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:43.096124 | orchestrator | 2025-09-18 01:02:43 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:43.096909 | orchestrator | 2025-09-18 01:02:43 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:43.096935 | orchestrator | 2025-09-18 01:02:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:46.120405 | orchestrator | 2025-09-18 01:02:46 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:46.120625 | orchestrator | 2025-09-18 01:02:46 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:46.121146 | orchestrator | 2025-09-18 01:02:46 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:46.121624 | orchestrator | 2025-09-18 01:02:46 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 
01:02:46.121675 | orchestrator | 2025-09-18 01:02:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:49.154711 | orchestrator | 2025-09-18 01:02:49 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:49.157529 | orchestrator | 2025-09-18 01:02:49 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:49.160122 | orchestrator | 2025-09-18 01:02:49 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:49.161892 | orchestrator | 2025-09-18 01:02:49 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:49.162543 | orchestrator | 2025-09-18 01:02:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:52.191889 | orchestrator | 2025-09-18 01:02:52 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:52.192592 | orchestrator | 2025-09-18 01:02:52 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:52.194226 | orchestrator | 2025-09-18 01:02:52 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:52.194964 | orchestrator | 2025-09-18 01:02:52 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:52.194992 | orchestrator | 2025-09-18 01:02:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:55.228225 | orchestrator | 2025-09-18 01:02:55 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:55.229722 | orchestrator | 2025-09-18 01:02:55 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:55.230749 | orchestrator | 2025-09-18 01:02:55 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:55.232526 | orchestrator | 2025-09-18 01:02:55 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:55.232554 | orchestrator | 2025-09-18 01:02:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:02:58.264096 | orchestrator | 2025-09-18 01:02:58 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:02:58.264879 | orchestrator | 2025-09-18 01:02:58 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:02:58.265968 | orchestrator | 2025-09-18 01:02:58 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:02:58.266709 | orchestrator | 2025-09-18 01:02:58 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:02:58.266783 | orchestrator | 2025-09-18 01:02:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:01.296943 | orchestrator | 2025-09-18 01:03:01 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:01.299355 | orchestrator | 2025-09-18 01:03:01 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:01.301361 | orchestrator | 2025-09-18 01:03:01 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:01.304445 | orchestrator | 2025-09-18 01:03:01 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:01.304679 | orchestrator | 2025-09-18 01:03:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:04.330872 | orchestrator | 2025-09-18 01:03:04 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:04.331221 | 
orchestrator | 2025-09-18 01:03:04 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:04.332034 | orchestrator | 2025-09-18 01:03:04 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:04.332977 | orchestrator | 2025-09-18 01:03:04 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:04.333011 | orchestrator | 2025-09-18 01:03:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:07.373648 | orchestrator | 2025-09-18 01:03:07 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:07.376337 | orchestrator | 2025-09-18 01:03:07 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:07.377176 | orchestrator | 2025-09-18 01:03:07 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:07.377822 | orchestrator | 2025-09-18 01:03:07 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:07.378004 | orchestrator | 2025-09-18 01:03:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:10.416276 | orchestrator | 2025-09-18 01:03:10 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:10.418455 | orchestrator | 2025-09-18 01:03:10 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:10.420273 | orchestrator | 2025-09-18 01:03:10 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:10.421577 | orchestrator | 2025-09-18 01:03:10 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:10.421606 | orchestrator | 2025-09-18 01:03:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:13.459172 | orchestrator | 2025-09-18 01:03:13 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:13.459260 | orchestrator | 2025-09-18 01:03:13 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:13.459276 | orchestrator | 2025-09-18 01:03:13 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:13.460531 | orchestrator | 2025-09-18 01:03:13 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:13.460556 | orchestrator | 2025-09-18 01:03:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:16.486429 | orchestrator | 2025-09-18 01:03:16 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:16.486999 | orchestrator | 2025-09-18 01:03:16 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:16.487530 | orchestrator | 2025-09-18 01:03:16 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:16.488245 | orchestrator | 2025-09-18 01:03:16 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:16.488287 | orchestrator | 2025-09-18 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:19.529287 | orchestrator | 2025-09-18 01:03:19 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:19.529971 | orchestrator | 2025-09-18 01:03:19 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:19.530902 | orchestrator | 2025-09-18 01:03:19 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:19.531840 | 
orchestrator | 2025-09-18 01:03:19 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:19.531864 | orchestrator | 2025-09-18 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:22.572203 | orchestrator | 2025-09-18 01:03:22 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:22.573533 | orchestrator | 2025-09-18 01:03:22 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:22.574904 | orchestrator | 2025-09-18 01:03:22 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:22.575886 | orchestrator | 2025-09-18 01:03:22 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:22.575909 | orchestrator | 2025-09-18 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:25.633797 | orchestrator | 2025-09-18 01:03:25 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:25.634189 | orchestrator | 2025-09-18 01:03:25 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:25.636206 | orchestrator | 2025-09-18 01:03:25 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:25.636775 | orchestrator | 2025-09-18 01:03:25 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state STARTED 2025-09-18 01:03:25.636808 | orchestrator | 2025-09-18 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:28.663505 | orchestrator | 2025-09-18 01:03:28 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:28.663658 | orchestrator | 2025-09-18 01:03:28 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:28.664550 | orchestrator | 2025-09-18 01:03:28 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:28.666085 | orchestrator | 2025-09-18 01:03:28 | INFO  | Task 096807d4-e853-40f5-ab97-29df67703b6b is in state SUCCESS 2025-09-18 01:03:28.671486 | orchestrator | 2025-09-18 01:03:28.671518 | orchestrator | 2025-09-18 01:03:28.671529 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-18 01:03:28.671541 | orchestrator | 2025-09-18 01:03:28.671552 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-18 01:03:28.671563 | orchestrator | Thursday 18 September 2025 01:01:48 +0000 (0:00:00.093) 0:00:00.093 **** 2025-09-18 01:03:28.671575 | orchestrator | changed: [localhost] 2025-09-18 01:03:28.671587 | orchestrator | 2025-09-18 01:03:28.671598 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-18 01:03:28.671609 | orchestrator | Thursday 18 September 2025 01:01:49 +0000 (0:00:00.802) 0:00:00.895 **** 2025-09-18 01:03:28.671620 | orchestrator | changed: [localhost] 2025-09-18 01:03:28.671631 | orchestrator | 2025-09-18 01:03:28.671642 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-18 01:03:28.671653 | orchestrator | Thursday 18 September 2025 01:02:30 +0000 (0:00:40.639) 0:00:41.534 **** 2025-09-18 01:03:28.671663 | orchestrator | changed: [localhost] 2025-09-18 01:03:28.671674 | orchestrator | 2025-09-18 01:03:28.671685 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:03:28.671696 | orchestrator | 
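Editorial aside: the short play above prepares the ironic-agent (IPA) images — it ensures a destination directory exists, then downloads the initramfs and the kernel — before any ironic role would run. A minimal sketch of that idempotent download step is shown below; the URLs and the destination path are placeholders for illustration only, not the values used by this job.

```python
from pathlib import Path
from urllib.request import urlopen

# Placeholder values; the job's real source URLs and destination directory
# come from the testbed/kolla configuration, not from this sketch.
IMAGE_DIR = Path("/tmp/ironic-agent")
IMAGES = {
    "ironic-agent.initramfs": "https://example.org/ipa/ironic-agent.initramfs",
    "ironic-agent.kernel": "https://example.org/ipa/ironic-agent.kernel",
}

def download_ipa_images() -> None:
    # TASK [Ensure the destination directory exists]
    IMAGE_DIR.mkdir(parents=True, exist_ok=True)
    # TASK [Download ironic-agent initramfs] / [Download ironic-agent kernel]
    for name, url in IMAGES.items():
        target = IMAGE_DIR / name
        if target.exists():  # skip files already present, keeping the step idempotent
            continue
        with urlopen(url) as response, open(target, "wb") as fh:
            fh.write(response.read())

if __name__ == "__main__":
    download_ipa_images()
```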
2025-09-18 01:03:28.671706 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:03:28.671717 | orchestrator | Thursday 18 September 2025 01:02:35 +0000 (0:00:05.511) 0:00:47.046 **** 2025-09-18 01:03:28.671728 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:03:28.671739 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:28.671750 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:28.671760 | orchestrator | 2025-09-18 01:03:28.671771 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:03:28.671782 | orchestrator | Thursday 18 September 2025 01:02:36 +0000 (0:00:00.727) 0:00:47.773 **** 2025-09-18 01:03:28.671793 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-18 01:03:28.671804 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-18 01:03:28.671814 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-18 01:03:28.671850 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-18 01:03:28.671861 | orchestrator | 2025-09-18 01:03:28.671872 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-18 01:03:28.671883 | orchestrator | skipping: no hosts matched 2025-09-18 01:03:28.671894 | orchestrator | 2025-09-18 01:03:28.671905 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:03:28.671928 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:03:28.671941 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:03:28.671953 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:03:28.671965 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:03:28.671976 | orchestrator | 2025-09-18 01:03:28.671987 | orchestrator | 2025-09-18 01:03:28.671998 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:03:28.672009 | orchestrator | Thursday 18 September 2025 01:02:37 +0000 (0:00:00.798) 0:00:48.572 **** 2025-09-18 01:03:28.672020 | orchestrator | =============================================================================== 2025-09-18 01:03:28.672031 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 40.64s 2025-09-18 01:03:28.672041 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.51s 2025-09-18 01:03:28.672052 | orchestrator | Ensure the destination directory exists --------------------------------- 0.80s 2025-09-18 01:03:28.672063 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2025-09-18 01:03:28.672073 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s 2025-09-18 01:03:28.672084 | orchestrator | 2025-09-18 01:03:28.672098 | orchestrator | 2025-09-18 01:03:28.672110 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:03:28.672123 | orchestrator | 2025-09-18 01:03:28.672137 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:03:28.672150 | orchestrator 
| Thursday 18 September 2025 00:59:32 +0000 (0:00:00.253) 0:00:00.253 **** 2025-09-18 01:03:28.672162 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:03:28.672175 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:28.672188 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:28.672200 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:03:28.672212 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:03:28.672224 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:03:28.672237 | orchestrator | 2025-09-18 01:03:28.672250 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:03:28.672263 | orchestrator | Thursday 18 September 2025 00:59:32 +0000 (0:00:00.706) 0:00:00.960 **** 2025-09-18 01:03:28.672275 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-18 01:03:28.672288 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-18 01:03:28.672300 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-18 01:03:28.672313 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-18 01:03:28.672326 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-18 01:03:28.672338 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-18 01:03:28.672351 | orchestrator | 2025-09-18 01:03:28.672363 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-18 01:03:28.672394 | orchestrator | 2025-09-18 01:03:28.672407 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 01:03:28.672420 | orchestrator | Thursday 18 September 2025 00:59:33 +0000 (0:00:00.645) 0:00:01.605 **** 2025-09-18 01:03:28.672444 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:03:28.672464 | orchestrator | 2025-09-18 01:03:28.672475 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-18 01:03:28.672486 | orchestrator | Thursday 18 September 2025 00:59:34 +0000 (0:00:01.243) 0:00:02.849 **** 2025-09-18 01:03:28.672497 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:28.672508 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:03:28.672519 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:28.672530 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:03:28.672541 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:03:28.672552 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:03:28.672563 | orchestrator | 2025-09-18 01:03:28.672574 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-18 01:03:28.672585 | orchestrator | Thursday 18 September 2025 00:59:36 +0000 (0:00:01.243) 0:00:04.093 **** 2025-09-18 01:03:28.672595 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:03:28.672606 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:28.672617 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:28.672627 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:03:28.672638 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:03:28.672649 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:03:28.672660 | orchestrator | 2025-09-18 01:03:28.672671 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-18 01:03:28.672681 | orchestrator | Thursday 18 
September 2025 00:59:37 +0000 (0:00:01.167) 0:00:05.261 **** 2025-09-18 01:03:28.672692 | orchestrator | ok: [testbed-node-0] => { 2025-09-18 01:03:28.672703 | orchestrator |  "changed": false, 2025-09-18 01:03:28.672714 | orchestrator |  "msg": "All assertions passed" 2025-09-18 01:03:28.672725 | orchestrator | } 2025-09-18 01:03:28.672736 | orchestrator | ok: [testbed-node-1] => { 2025-09-18 01:03:28.672747 | orchestrator |  "changed": false, 2025-09-18 01:03:28.672757 | orchestrator |  "msg": "All assertions passed" 2025-09-18 01:03:28.672768 | orchestrator | } 2025-09-18 01:03:28.672779 | orchestrator | ok: [testbed-node-2] => { 2025-09-18 01:03:28.672790 | orchestrator |  "changed": false, 2025-09-18 01:03:28.672800 | orchestrator |  "msg": "All assertions passed" 2025-09-18 01:03:28.672811 | orchestrator | } 2025-09-18 01:03:28.672822 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 01:03:28.672832 | orchestrator |  "changed": false, 2025-09-18 01:03:28.672843 | orchestrator |  "msg": "All assertions passed" 2025-09-18 01:03:28.672854 | orchestrator | } 2025-09-18 01:03:28.672865 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 01:03:28.672875 | orchestrator |  "changed": false, 2025-09-18 01:03:28.672886 | orchestrator |  "msg": "All assertions passed" 2025-09-18 01:03:28.672897 | orchestrator | } 2025-09-18 01:03:28.672907 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 01:03:28.672918 | orchestrator |  "changed": false, 2025-09-18 01:03:28.672929 | orchestrator |  "msg": "All assertions passed" 2025-09-18 01:03:28.672940 | orchestrator | } 2025-09-18 01:03:28.672950 | orchestrator | 2025-09-18 01:03:28.672966 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-18 01:03:28.672978 | orchestrator | Thursday 18 September 2025 00:59:38 +0000 (0:00:01.000) 0:00:06.262 **** 2025-09-18 01:03:28.672989 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.673000 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.673010 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.673021 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.673032 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.673043 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.673053 | orchestrator | 2025-09-18 01:03:28.673064 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-18 01:03:28.673075 | orchestrator | Thursday 18 September 2025 00:59:38 +0000 (0:00:00.646) 0:00:06.909 **** 2025-09-18 01:03:28.673086 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-18 01:03:28.673105 | orchestrator | 2025-09-18 01:03:28.673116 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-18 01:03:28.673127 | orchestrator | Thursday 18 September 2025 00:59:42 +0000 (0:00:03.575) 0:00:10.485 **** 2025-09-18 01:03:28.673138 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-18 01:03:28.673149 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-18 01:03:28.673160 | orchestrator | 2025-09-18 01:03:28.673171 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-18 01:03:28.673182 | orchestrator | Thursday 18 September 2025 00:59:50 +0000 (0:00:07.650) 0:00:18.135 **** 2025-09-18 
01:03:28.673216 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:03:28.673244 | orchestrator | 2025-09-18 01:03:28.673265 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-18 01:03:28.673276 | orchestrator | Thursday 18 September 2025 00:59:52 +0000 (0:00:02.871) 0:00:21.006 **** 2025-09-18 01:03:28.673287 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:03:28.673298 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-18 01:03:28.673309 | orchestrator | 2025-09-18 01:03:28.673320 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-18 01:03:28.673331 | orchestrator | Thursday 18 September 2025 00:59:56 +0000 (0:00:03.244) 0:00:24.251 **** 2025-09-18 01:03:28.673342 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:03:28.673353 | orchestrator | 2025-09-18 01:03:28.673451 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-18 01:03:28.673472 | orchestrator | Thursday 18 September 2025 00:59:59 +0000 (0:00:03.121) 0:00:27.373 **** 2025-09-18 01:03:28.674004 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-18 01:03:28.674115 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-18 01:03:28.674128 | orchestrator | 2025-09-18 01:03:28.674138 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 01:03:28.674232 | orchestrator | Thursday 18 September 2025 01:00:07 +0000 (0:00:08.390) 0:00:35.764 **** 2025-09-18 01:03:28.674242 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.674252 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.674271 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.674282 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.674291 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.674532 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.674550 | orchestrator | 2025-09-18 01:03:28.674561 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-18 01:03:28.674571 | orchestrator | Thursday 18 September 2025 01:00:08 +0000 (0:00:00.604) 0:00:36.368 **** 2025-09-18 01:03:28.674581 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.674591 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.674602 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.674612 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.674622 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.674632 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.674642 | orchestrator | 2025-09-18 01:03:28.674652 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-18 01:03:28.674663 | orchestrator | Thursday 18 September 2025 01:00:10 +0000 (0:00:01.724) 0:00:38.092 **** 2025-09-18 01:03:28.674673 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:28.674683 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:28.674693 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:03:28.674703 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:03:28.674714 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:03:28.674724 | orchestrator | ok: [testbed-node-0] 2025-09-18 
01:03:28.674733 | orchestrator | 2025-09-18 01:03:28.674744 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-18 01:03:28.674765 | orchestrator | Thursday 18 September 2025 01:00:11 +0000 (0:00:01.687) 0:00:39.780 **** 2025-09-18 01:03:28.674776 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.674786 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.674796 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.674806 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.674816 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.674826 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.674836 | orchestrator | 2025-09-18 01:03:28.674847 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-18 01:03:28.674857 | orchestrator | Thursday 18 September 2025 01:00:13 +0000 (0:00:01.595) 0:00:41.376 **** 2025-09-18 01:03:28.674876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.674892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.674903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.674942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.674960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.674975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.674986 | orchestrator | 2025-09-18 01:03:28.674997 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-18 01:03:28.675007 | orchestrator | Thursday 18 September 2025 01:00:15 +0000 (0:00:02.355) 0:00:43.731 **** 2025-09-18 01:03:28.675018 | orchestrator | [WARNING]: Skipped 2025-09-18 01:03:28.675029 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-18 01:03:28.675040 | orchestrator | due to this access issue: 2025-09-18 01:03:28.675050 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-18 01:03:28.675060 | orchestrator | a directory 2025-09-18 01:03:28.675071 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 01:03:28.675081 | orchestrator | 2025-09-18 01:03:28.675092 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 01:03:28.675102 | orchestrator | 
Thursday 18 September 2025 01:00:16 +0000 (0:00:01.175) 0:00:44.907 **** 2025-09-18 01:03:28.675113 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:03:28.675123 | orchestrator | 2025-09-18 01:03:28.675134 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-18 01:03:28.675144 | orchestrator | Thursday 18 September 2025 01:00:18 +0000 (0:00:01.152) 0:00:46.059 **** 2025-09-18 01:03:28.675158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.675194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.675214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.675231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.675243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.675256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.675272 | orchestrator | 2025-09-18 01:03:28.675285 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-18 01:03:28.675318 | orchestrator | Thursday 18 September 2025 01:00:21 +0000 (0:00:03.110) 0:00:49.170 **** 2025-09-18 01:03:28.675333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675346 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.675363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675391 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.675403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675415 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.675426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.675471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.675483 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.675495 | orchestrator | skipping: 
[testbed-node-4] 2025-09-18 01:03:28.675507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.675519 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.675530 | orchestrator | 2025-09-18 01:03:28.675542 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-18 01:03:28.675554 | orchestrator | Thursday 18 September 2025 01:00:23 +0000 (0:00:02.197) 0:00:51.367 **** 2025-09-18 01:03:28.675574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675585 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.675595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675605 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.675615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675631 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.675647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.675657 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.675667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.675678 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.675692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.675702 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.675712 | orchestrator | 2025-09-18 01:03:28.675721 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-18 01:03:28.675731 | orchestrator | Thursday 18 September 2025 01:00:25 +0000 (0:00:02.066) 0:00:53.434 **** 2025-09-18 01:03:28.675741 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.675750 | orchestrator 
| skipping: [testbed-node-0] 2025-09-18 01:03:28.675760 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.675770 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.675779 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.675789 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.675798 | orchestrator | 2025-09-18 01:03:28.675813 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-18 01:03:28.675823 | orchestrator | Thursday 18 September 2025 01:00:27 +0000 (0:00:01.601) 0:00:55.035 **** 2025-09-18 01:03:28.675832 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.675842 | orchestrator | 2025-09-18 01:03:28.675851 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-18 01:03:28.675861 | orchestrator | Thursday 18 September 2025 01:00:27 +0000 (0:00:00.125) 0:00:55.161 **** 2025-09-18 01:03:28.675870 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.675880 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.675889 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.675899 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.675908 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.675918 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.675927 | orchestrator | 2025-09-18 01:03:28.675937 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-18 01:03:28.675946 | orchestrator | Thursday 18 September 2025 01:00:27 +0000 (0:00:00.602) 0:00:55.763 **** 2025-09-18 01:03:28.675963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675974 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.675984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.675994 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.676008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.676018 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.676034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676055 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.676064 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.676081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676091 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.676101 | orchestrator | 2025-09-18 01:03:28.676111 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-18 01:03:28.676120 | orchestrator | Thursday 18 September 2025 01:00:29 +0000 (0:00:02.032) 0:00:57.796 **** 2025-09-18 01:03:28.676130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.676161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.676198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.676208 | orchestrator | 2025-09-18 01:03:28.676218 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-18 01:03:28.676228 | orchestrator | Thursday 18 September 2025 01:00:33 +0000 (0:00:03.464) 0:01:01.261 **** 2025-09-18 01:03:28.676238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.676264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.676279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.676412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676434 | orchestrator | 2025-09-18 01:03:28.676445 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-18 01:03:28.676455 | orchestrator | Thursday 18 September 2025 01:00:39 +0000 (0:00:06.195) 0:01:07.456 **** 2025-09-18 01:03:28.676465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.676475 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.676493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.676503 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.676513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 
01:03:28.676529 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.676544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676554 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.676564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676574 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.676584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676593 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.676603 | orchestrator | 2025-09-18 01:03:28.676613 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-18 01:03:28.676623 | orchestrator | Thursday 18 September 2025 01:00:41 +0000 (0:00:02.089) 0:01:09.545 **** 2025-09-18 01:03:28.676632 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.676642 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.676651 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.676661 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:28.676670 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:28.676680 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:28.676690 | orchestrator | 2025-09-18 01:03:28.676699 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-18 01:03:28.676714 | orchestrator | Thursday 18 September 
2025 01:00:44 +0000 (0:00:03.275) 0:01:12.821 **** 2025-09-18 01:03:28.676724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676740 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.676750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676760 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.676773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.676784 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.676794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676810 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.676836 | orchestrator | 2025-09-18 01:03:28.676846 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-18 01:03:28.676856 | orchestrator | Thursday 18 September 2025 01:00:49 +0000 (0:00:04.324) 0:01:17.146 **** 2025-09-18 01:03:28.676865 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.676875 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.676884 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.676894 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.676904 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.676913 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.676923 | orchestrator | 2025-09-18 01:03:28.676932 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-18 01:03:28.676942 | orchestrator | Thursday 18 September 2025 01:00:51 +0000 (0:00:02.839) 0:01:19.985 **** 2025-09-18 01:03:28.676951 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.676961 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.676970 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.676980 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.676989 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.676999 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677008 | orchestrator | 2025-09-18 01:03:28.677022 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-18 01:03:28.677032 | orchestrator | Thursday 18 September 2025 01:00:54 +0000 (0:00:02.551) 0:01:22.536 **** 2025-09-18 
01:03:28.677042 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677051 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677061 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677070 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677080 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677089 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677099 | orchestrator | 2025-09-18 01:03:28.677109 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-18 01:03:28.677118 | orchestrator | Thursday 18 September 2025 01:00:56 +0000 (0:00:02.450) 0:01:24.987 **** 2025-09-18 01:03:28.677128 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677138 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677147 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677157 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677166 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677176 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677185 | orchestrator | 2025-09-18 01:03:28.677195 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-18 01:03:28.677205 | orchestrator | Thursday 18 September 2025 01:00:59 +0000 (0:00:02.611) 0:01:27.599 **** 2025-09-18 01:03:28.677214 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677224 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677233 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677243 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677252 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677262 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677271 | orchestrator | 2025-09-18 01:03:28.677281 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-18 01:03:28.677290 | orchestrator | Thursday 18 September 2025 01:01:01 +0000 (0:00:02.414) 0:01:30.013 **** 2025-09-18 01:03:28.677305 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677315 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677324 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677334 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677343 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677353 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677362 | orchestrator | 2025-09-18 01:03:28.677396 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-18 01:03:28.677407 | orchestrator | Thursday 18 September 2025 01:01:04 +0000 (0:00:02.268) 0:01:32.282 **** 2025-09-18 01:03:28.677417 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 01:03:28.677427 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677436 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 01:03:28.677446 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 01:03:28.677455 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677465 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677474 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 01:03:28.677484 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677498 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 01:03:28.677508 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677518 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 01:03:28.677528 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677537 | orchestrator | 2025-09-18 01:03:28.677547 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-18 01:03:28.677556 | orchestrator | Thursday 18 September 2025 01:01:06 +0000 (0:00:02.642) 0:01:34.924 **** 2025-09-18 01:03:28.677566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.677576 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.677601 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.677627 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.677647 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 2025-09-18 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:28.677674 | orchestrator | 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.677685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.677695 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677705 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677715 | orchestrator | 2025-09-18 01:03:28.677724 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-18 01:03:28.677734 | orchestrator | Thursday 18 September 2025 01:01:08 +0000 (0:00:01.565) 0:01:36.490 **** 2025-09-18 01:03:28.677748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.677764 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.677784 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.677810 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.677829 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.677862 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.677882 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677892 | orchestrator | 2025-09-18 01:03:28.677902 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-18 01:03:28.677911 | orchestrator | Thursday 18 September 2025 01:01:10 +0000 (0:00:01.936) 0:01:38.426 **** 2025-09-18 01:03:28.677921 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.677931 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.677940 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.677950 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.677959 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.677969 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.677978 | orchestrator | 2025-09-18 01:03:28.677988 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-18 01:03:28.677997 | orchestrator | Thursday 18 September 2025 01:01:12 +0000 (0:00:02.127) 0:01:40.553 **** 2025-09-18 01:03:28.678007 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678065 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678076 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678086 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:03:28.678096 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:03:28.678105 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:03:28.678115 | orchestrator | 2025-09-18 01:03:28.678124 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-18 01:03:28.678134 | orchestrator | Thursday 18 September 2025 01:01:15 +0000 (0:00:03.345) 0:01:43.899 **** 2025-09-18 01:03:28.678144 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678153 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
01:03:28.678163 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678173 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678182 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678192 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678201 | orchestrator | 2025-09-18 01:03:28.678211 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-18 01:03:28.678220 | orchestrator | Thursday 18 September 2025 01:01:18 +0000 (0:00:02.702) 0:01:46.601 **** 2025-09-18 01:03:28.678236 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678246 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678255 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678265 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678274 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678284 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678293 | orchestrator | 2025-09-18 01:03:28.678303 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-18 01:03:28.678312 | orchestrator | Thursday 18 September 2025 01:01:22 +0000 (0:00:04.257) 0:01:50.859 **** 2025-09-18 01:03:28.678322 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678331 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678350 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678359 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678369 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678392 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678402 | orchestrator | 2025-09-18 01:03:28.678411 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-18 01:03:28.678421 | orchestrator | Thursday 18 September 2025 01:01:24 +0000 (0:00:01.944) 0:01:52.804 **** 2025-09-18 01:03:28.678430 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678439 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678449 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678458 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678468 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678477 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678487 | orchestrator | 2025-09-18 01:03:28.678496 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-18 01:03:28.678506 | orchestrator | Thursday 18 September 2025 01:01:27 +0000 (0:00:02.772) 0:01:55.576 **** 2025-09-18 01:03:28.678515 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678525 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678534 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678544 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678553 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678562 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678572 | orchestrator | 2025-09-18 01:03:28.678582 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-18 01:03:28.678591 | orchestrator | Thursday 18 September 2025 01:01:29 +0000 (0:00:02.404) 0:01:57.981 **** 2025-09-18 01:03:28.678601 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678610 | orchestrator | skipping: [testbed-node-0] 
2025-09-18 01:03:28.678620 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678629 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678639 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678648 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678658 | orchestrator | 2025-09-18 01:03:28.678672 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-18 01:03:28.678682 | orchestrator | Thursday 18 September 2025 01:01:33 +0000 (0:00:03.142) 0:02:01.123 **** 2025-09-18 01:03:28.678692 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678701 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678711 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678720 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678730 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678739 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678749 | orchestrator | 2025-09-18 01:03:28.678759 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-18 01:03:28.678768 | orchestrator | Thursday 18 September 2025 01:01:35 +0000 (0:00:02.560) 0:02:03.684 **** 2025-09-18 01:03:28.678778 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 01:03:28.678788 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678797 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 01:03:28.678807 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678816 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 01:03:28.678826 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.678836 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 01:03:28.678845 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.678855 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 01:03:28.678871 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678880 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 01:03:28.678890 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.678899 | orchestrator | 2025-09-18 01:03:28.678909 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-18 01:03:28.678919 | orchestrator | Thursday 18 September 2025 01:01:38 +0000 (0:00:03.336) 0:02:07.020 **** 2025-09-18 01:03:28.678934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.678945 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:28.678955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.678965 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.678978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.678989 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.678999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.679015 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.679025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 01:03:28.679035 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.679051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 01:03:28.679062 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.679071 | orchestrator | 2025-09-18 01:03:28.679081 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-18 01:03:28.679091 | orchestrator | Thursday 18 September 2025 01:01:41 +0000 (0:00:02.523) 0:02:09.543 **** 2025-09-18 01:03:28.679101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.679115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.679125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.679141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 01:03:28.679158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.679168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 01:03:28.679178 | orchestrator | 2025-09-18 01:03:28.679188 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 01:03:28.679198 | orchestrator | Thursday 18 September 2025 01:01:45 +0000 (0:00:04.118) 0:02:13.662 **** 2025-09-18 01:03:28.679207 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:28.679217 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 01:03:28.679226 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:28.679236 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:03:28.679245 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:03:28.679255 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:03:28.679265 | orchestrator | 2025-09-18 01:03:28.679274 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-18 01:03:28.679288 | orchestrator | Thursday 18 September 2025 01:01:46 +0000 (0:00:00.510) 0:02:14.173 **** 2025-09-18 01:03:28.679298 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:28.679313 | orchestrator | 2025-09-18 01:03:28.679323 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-18 01:03:28.679332 | orchestrator | Thursday 18 September 2025 01:01:48 +0000 (0:00:02.239) 0:02:16.412 **** 2025-09-18 01:03:28.679342 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:28.679351 | orchestrator | 2025-09-18 01:03:28.679361 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-18 01:03:28.679383 | orchestrator | Thursday 18 September 2025 01:01:50 +0000 (0:00:02.395) 0:02:18.808 **** 2025-09-18 01:03:28.679393 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:28.679402 | orchestrator | 2025-09-18 01:03:28.679412 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 01:03:28.679422 | orchestrator | Thursday 18 September 2025 01:02:37 +0000 (0:00:46.581) 0:03:05.390 **** 2025-09-18 01:03:28.679431 | orchestrator | 2025-09-18 01:03:28.679441 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 01:03:28.679450 | orchestrator | Thursday 18 September 2025 01:02:37 +0000 (0:00:00.154) 0:03:05.544 **** 2025-09-18 01:03:28.679460 | orchestrator | 2025-09-18 01:03:28.679469 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 01:03:28.679479 | orchestrator | Thursday 18 September 2025 01:02:37 +0000 (0:00:00.424) 0:03:05.969 **** 2025-09-18 01:03:28.679488 | orchestrator | 2025-09-18 01:03:28.679498 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 01:03:28.679518 | orchestrator | Thursday 18 September 2025 01:02:38 +0000 (0:00:00.142) 0:03:06.111 **** 2025-09-18 01:03:28.679528 | orchestrator | 2025-09-18 01:03:28.679537 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 01:03:28.679547 | orchestrator | Thursday 18 September 2025 01:02:38 +0000 (0:00:00.064) 0:03:06.175 **** 2025-09-18 01:03:28.679556 | orchestrator | 2025-09-18 01:03:28.679566 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 01:03:28.679576 | orchestrator | Thursday 18 September 2025 01:02:38 +0000 (0:00:00.057) 0:03:06.233 **** 2025-09-18 01:03:28.679585 | orchestrator | 2025-09-18 01:03:28.679595 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-18 01:03:28.679604 | orchestrator | Thursday 18 September 2025 01:02:38 +0000 (0:00:00.054) 0:03:06.287 **** 2025-09-18 01:03:28.679613 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:28.679623 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:28.679633 | orchestrator | changed: 
[testbed-node-1] 2025-09-18 01:03:28.679642 | orchestrator | 2025-09-18 01:03:28.679652 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-18 01:03:28.679661 | orchestrator | Thursday 18 September 2025 01:03:01 +0000 (0:00:23.660) 0:03:29.947 **** 2025-09-18 01:03:28.679671 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:03:28.679681 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:03:28.679690 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:03:28.679700 | orchestrator | 2025-09-18 01:03:28.679709 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:03:28.679719 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 01:03:28.679734 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-18 01:03:28.679744 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-18 01:03:28.679754 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 01:03:28.679764 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 01:03:28.679779 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 01:03:28.679789 | orchestrator | 2025-09-18 01:03:28.679799 | orchestrator | 2025-09-18 01:03:28.679808 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:03:28.679818 | orchestrator | Thursday 18 September 2025 01:03:27 +0000 (0:00:25.445) 0:03:55.393 **** 2025-09-18 01:03:28.679828 | orchestrator | =============================================================================== 2025-09-18 01:03:28.679837 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.58s 2025-09-18 01:03:28.679847 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 25.45s 2025-09-18 01:03:28.679856 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.66s 2025-09-18 01:03:28.679866 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.39s 2025-09-18 01:03:28.679876 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.65s 2025-09-18 01:03:28.679885 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.20s 2025-09-18 01:03:28.679895 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.32s 2025-09-18 01:03:28.679904 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.26s 2025-09-18 01:03:28.679914 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.12s 2025-09-18 01:03:28.679923 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.58s 2025-09-18 01:03:28.679937 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.46s 2025-09-18 01:03:28.679946 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.35s 2025-09-18 01:03:28.679956 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.34s 
2025-09-18 01:03:28.679965 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.28s 2025-09-18 01:03:28.679975 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.24s 2025-09-18 01:03:28.679984 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.14s 2025-09-18 01:03:28.679994 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.12s 2025-09-18 01:03:28.680003 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.11s 2025-09-18 01:03:28.680013 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 2.87s 2025-09-18 01:03:28.680022 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.84s 2025-09-18 01:03:31.730142 | orchestrator | 2025-09-18 01:03:31 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:31.732111 | orchestrator | 2025-09-18 01:03:31 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:31.734993 | orchestrator | 2025-09-18 01:03:31 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:31.737767 | orchestrator | 2025-09-18 01:03:31 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:31.738130 | orchestrator | 2025-09-18 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:34.784790 | orchestrator | 2025-09-18 01:03:34 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:34.787338 | orchestrator | 2025-09-18 01:03:34 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:34.791669 | orchestrator | 2025-09-18 01:03:34 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:34.795611 | orchestrator | 2025-09-18 01:03:34 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:34.795687 | orchestrator | 2025-09-18 01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:37.842098 | orchestrator | 2025-09-18 01:03:37 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state STARTED 2025-09-18 01:03:37.843651 | orchestrator | 2025-09-18 01:03:37 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:37.845243 | orchestrator | 2025-09-18 01:03:37 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:37.846970 | orchestrator | 2025-09-18 01:03:37 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:37.846994 | orchestrator | 2025-09-18 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:40.894951 | orchestrator | 2025-09-18 01:03:40 | INFO  | Task d2d1895d-8b9b-4961-91b6-042cdde2b316 is in state SUCCESS 2025-09-18 01:03:40.897512 | orchestrator | 2025-09-18 01:03:40.897556 | orchestrator | 2025-09-18 01:03:40.897569 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:03:40.897581 | orchestrator | 2025-09-18 01:03:40.897592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:03:40.897604 | orchestrator | Thursday 18 September 2025 01:00:38 +0000 (0:00:00.427) 0:00:00.427 **** 2025-09-18 01:03:40.897616 | orchestrator | ok: [testbed-node-0] 2025-09-18 
01:03:40.897628 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:40.897639 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:40.897650 | orchestrator | 2025-09-18 01:03:40.897660 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:03:40.897672 | orchestrator | Thursday 18 September 2025 01:00:38 +0000 (0:00:00.295) 0:00:00.722 **** 2025-09-18 01:03:40.897683 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-18 01:03:40.897695 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-18 01:03:40.897706 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-18 01:03:40.897716 | orchestrator | 2025-09-18 01:03:40.897727 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-18 01:03:40.897738 | orchestrator | 2025-09-18 01:03:40.897749 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-18 01:03:40.897760 | orchestrator | Thursday 18 September 2025 01:00:38 +0000 (0:00:00.348) 0:00:01.070 **** 2025-09-18 01:03:40.897771 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:03:40.897783 | orchestrator | 2025-09-18 01:03:40.897793 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-18 01:03:40.897804 | orchestrator | Thursday 18 September 2025 01:00:39 +0000 (0:00:00.702) 0:00:01.772 **** 2025-09-18 01:03:40.897815 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-18 01:03:40.897825 | orchestrator | 2025-09-18 01:03:40.897836 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-18 01:03:40.897847 | orchestrator | Thursday 18 September 2025 01:00:43 +0000 (0:00:03.741) 0:00:05.514 **** 2025-09-18 01:03:40.897875 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-18 01:03:40.897887 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-18 01:03:40.897898 | orchestrator | 2025-09-18 01:03:40.897909 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-18 01:03:40.897920 | orchestrator | Thursday 18 September 2025 01:00:50 +0000 (0:00:07.034) 0:00:12.549 **** 2025-09-18 01:03:40.897931 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:03:40.897942 | orchestrator | 2025-09-18 01:03:40.897953 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-18 01:03:40.898253 | orchestrator | Thursday 18 September 2025 01:00:53 +0000 (0:00:03.380) 0:00:15.930 **** 2025-09-18 01:03:40.898308 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:03:40.898321 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-18 01:03:40.898334 | orchestrator | 2025-09-18 01:03:40.898347 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-18 01:03:40.898388 | orchestrator | Thursday 18 September 2025 01:00:57 +0000 (0:00:03.956) 0:00:19.886 **** 2025-09-18 01:03:40.898401 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:03:40.898413 | orchestrator | 2025-09-18 01:03:40.898426 | 
orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-18 01:03:40.898440 | orchestrator | Thursday 18 September 2025 01:01:01 +0000 (0:00:03.700) 0:00:23.587 **** 2025-09-18 01:03:40.898453 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-18 01:03:40.898465 | orchestrator | 2025-09-18 01:03:40.898476 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-18 01:03:40.898487 | orchestrator | Thursday 18 September 2025 01:01:05 +0000 (0:00:04.400) 0:00:27.988 **** 2025-09-18 01:03:40.898501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.898543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.898556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.898576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.898919 | orchestrator | 2025-09-18 01:03:40.898931 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-18 01:03:40.898942 | orchestrator | Thursday 18 September 2025 01:01:09 +0000 (0:00:03.345) 0:00:31.333 **** 2025-09-18 01:03:40.898953 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.898964 | orchestrator | 2025-09-18 01:03:40.898975 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-18 01:03:40.898986 | orchestrator | Thursday 18 September 2025 01:01:09 +0000 (0:00:00.137) 0:00:31.470 **** 2025-09-18 01:03:40.898997 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.899008 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:40.899019 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:40.899030 | orchestrator | 2025-09-18 01:03:40.899040 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-18 01:03:40.899051 | orchestrator | Thursday 18 September 2025 01:01:09 +0000 (0:00:00.486) 0:00:31.957 **** 2025-09-18 01:03:40.899062 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:03:40.899073 | orchestrator | 
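The designate items echoed above are the kolla-ansible service definitions the role loops over to create config directories, copy certificates, and later check the containers. For readability, one entry from that dict is repeated below as YAML; this is only an illustrative reconstruction of the structure visible in this log (values as reported for testbed-node-0, with the empty placeholder volume entries omitted), not the upstream role source.

  # designate-api service definition as seen in the log, rewritten as YAML (illustrative only)
  designate-api:
    container_name: designate_api
    group: designate-api
    enabled: true
    image: registry.osism.tech/kolla/designate-api:2024.2
    volumes:
      - "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
    healthcheck:                       # values are strings in the logged dict
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
      timeout: "30"
    haproxy:
      designate_api:                   # internal VIP listener
        enabled: "yes"
        mode: http
        external: false
        port: "9001"
        listen_port: "9001"
      designate_api_external:          # external endpoint behind api.testbed.osism.xyz
        enabled: "yes"
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9001"
        listen_port: "9001"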
2025-09-18 01:03:40.899084 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-18 01:03:40.899096 | orchestrator | Thursday 18 September 2025 01:01:10 +0000 (0:00:00.569) 0:00:32.526 **** 2025-09-18 01:03:40.899107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.899126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.899155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.899167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.899996 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.900013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.900045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.900058 | orchestrator | 2025-09-18 01:03:40.900071 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-18 01:03:40.900083 | orchestrator | Thursday 18 September 2025 01:01:16 +0000 (0:00:06.411) 0:00:38.938 **** 2025-09-18 01:03:40.900095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.900108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.900166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.900225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-09-18 01:03:40.900237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.900256 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.900301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900383 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:40.900397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.900412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.900470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900530 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:40.900543 | orchestrator | 2025-09-18 01:03:40.900555 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-18 01:03:40.900569 | orchestrator | Thursday 18 September 2025 01:01:17 +0000 (0:00:01.093) 0:00:40.032 **** 2025-09-18 01:03:40.900582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.900596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.900649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 
01:03:40.900677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900709 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.900723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.900736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.900786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 
01:03:40.900800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900839 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:40.900851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.900863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.900881 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.900967 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:40.900978 | orchestrator | 2025-09-18 01:03:40.901067 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-18 01:03:40.901082 | orchestrator | Thursday 18 September 2025 01:01:21 +0000 (0:00:03.262) 0:00:43.294 **** 2025-09-18 01:03:40.901094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.901114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.901160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.901173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}}) 2025-09-18 01:03:40.901321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901552 | orchestrator | 2025-09-18 01:03:40.901564 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-18 01:03:40.901575 | orchestrator | Thursday 18 September 2025 01:01:28 +0000 (0:00:07.274) 0:00:50.568 **** 2025-09-18 01:03:40.901593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.901614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.901626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.901643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.901854 | orchestrator | 2025-09-18 01:03:40.901865 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-18 01:03:40.901876 | orchestrator | Thursday 18 September 2025 01:01:49 +0000 (0:00:20.726) 0:01:11.295 **** 2025-09-18 01:03:40.901888 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-18 01:03:40.901899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-18 01:03:40.901910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-18 01:03:40.901920 | orchestrator | 2025-09-18 01:03:40.901931 | orchestrator | TASK [designate : Copying over named.conf] 
************************************* 2025-09-18 01:03:40.901942 | orchestrator | Thursday 18 September 2025 01:01:53 +0000 (0:00:04.598) 0:01:15.893 **** 2025-09-18 01:03:40.901959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-18 01:03:40.901976 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-18 01:03:40.901987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-18 01:03:40.901998 | orchestrator | 2025-09-18 01:03:40.902008 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-18 01:03:40.902062 | orchestrator | Thursday 18 September 2025 01:01:56 +0000 (0:00:02.651) 0:01:18.545 **** 2025-09-18 01:03:40.902077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902124 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902306 | 
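The items looped over in these designate tasks are the role's per-service definitions, dumped here as single-line Python dicts. For readability, the designate-api entry visible above corresponds to a structure along the following lines; the values are copied from the log output (the empty volume entries are omitted), while the exact variable name and layout inside the role defaults are not shown in the log and should be treated as an approximation:

  designate-api:
    container_name: designate_api
    group: designate-api
    enabled: true
    image: registry.osism.tech/kolla/designate-api:2024.2
    volumes:
      - "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
      timeout: "30"
    haproxy:
      designate_api:
        enabled: "yes"
        mode: http
        external: false
        port: "9001"
        listen_port: "9001"
      designate_api_external:
        enabled: "yes"
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9001"
        listen_port: "9001"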
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902393 | orchestrator | 2025-09-18 01:03:40.902405 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-18 01:03:40.902416 | orchestrator | Thursday 18 September 2025 01:01:59 +0000 (0:00:02.840) 0:01:21.385 **** 2025-09-18 01:03:40.902437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902595 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.902691 | orchestrator | 2025-09-18 01:03:40.902702 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-18 01:03:40.902713 | orchestrator | Thursday 18 September 2025 01:02:01 +0000 (0:00:02.562) 0:01:23.948 **** 2025-09-18 01:03:40.902724 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.902735 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:40.902746 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:40.902757 | orchestrator | 2025-09-18 01:03:40.902768 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-18 01:03:40.902779 | orchestrator | Thursday 18 September 2025 01:02:02 +0000 (0:00:00.349) 0:01:24.298 **** 2025-09-18 01:03:40.902795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.902818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902877 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.902893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.902905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.902917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.902975 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:40.902991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 01:03:40.903003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-18 01:03:40.903014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.903026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.903043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.903060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:03:40.903072 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:40.903083 | orchestrator | 2025-09-18 01:03:40.903094 | orchestrator | TASK [designate : Check designate 
containers] ********************************** 2025-09-18 01:03:40.903105 | orchestrator | Thursday 18 September 2025 01:02:03 +0000 (0:00:01.001) 0:01:25.299 **** 2025-09-18 01:03:40.903120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.903133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.903145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 01:03:40.903164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 
01:03:40.903181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903331 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:03:40.903395 | orchestrator | 2025-09-18 01:03:40.903406 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-18 01:03:40.903417 | orchestrator | Thursday 18 September 2025 01:02:08 +0000 (0:00:05.056) 0:01:30.356 **** 2025-09-18 01:03:40.903428 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:40.903439 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:40.903450 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:40.903461 | orchestrator | 2025-09-18 01:03:40.903472 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-18 01:03:40.903483 | orchestrator | Thursday 18 September 2025 01:02:08 +0000 (0:00:00.331) 0:01:30.687 **** 2025-09-18 01:03:40.903494 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-18 01:03:40.903505 | orchestrator | 2025-09-18 01:03:40.903516 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-18 01:03:40.903527 | orchestrator | Thursday 18 September 2025 01:02:10 +0000 (0:00:02.466) 0:01:33.154 **** 2025-09-18 01:03:40.903538 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 01:03:40.903549 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-18 01:03:40.903560 | orchestrator | 2025-09-18 01:03:40.903571 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-18 01:03:40.903581 | orchestrator | Thursday 18 September 2025 01:02:13 +0000 (0:00:02.558) 0:01:35.713 **** 2025-09-18 01:03:40.903592 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.903603 | orchestrator | 2025-09-18 
01:03:40.903614 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-18 01:03:40.903624 | orchestrator | Thursday 18 September 2025 01:02:32 +0000 (0:00:19.063) 0:01:54.777 **** 2025-09-18 01:03:40.903635 | orchestrator | 2025-09-18 01:03:40.903646 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-18 01:03:40.903657 | orchestrator | Thursday 18 September 2025 01:02:32 +0000 (0:00:00.289) 0:01:55.067 **** 2025-09-18 01:03:40.903668 | orchestrator | 2025-09-18 01:03:40.903683 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-18 01:03:40.903694 | orchestrator | Thursday 18 September 2025 01:02:32 +0000 (0:00:00.061) 0:01:55.129 **** 2025-09-18 01:03:40.903705 | orchestrator | 2025-09-18 01:03:40.903716 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-18 01:03:40.903727 | orchestrator | Thursday 18 September 2025 01:02:32 +0000 (0:00:00.072) 0:01:55.202 **** 2025-09-18 01:03:40.903744 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.903755 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:40.903765 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:40.903776 | orchestrator | 2025-09-18 01:03:40.903787 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-18 01:03:40.903798 | orchestrator | Thursday 18 September 2025 01:02:41 +0000 (0:00:08.467) 0:02:03.670 **** 2025-09-18 01:03:40.903809 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:40.903820 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.903830 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:40.903841 | orchestrator | 2025-09-18 01:03:40.903852 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-18 01:03:40.903863 | orchestrator | Thursday 18 September 2025 01:02:54 +0000 (0:00:13.419) 0:02:17.089 **** 2025-09-18 01:03:40.903874 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.903884 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:40.903895 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:40.903906 | orchestrator | 2025-09-18 01:03:40.903917 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-18 01:03:40.903928 | orchestrator | Thursday 18 September 2025 01:03:06 +0000 (0:00:12.065) 0:02:29.155 **** 2025-09-18 01:03:40.903939 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.903949 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:40.903961 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:40.903971 | orchestrator | 2025-09-18 01:03:40.903982 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-18 01:03:40.903993 | orchestrator | Thursday 18 September 2025 01:03:13 +0000 (0:00:06.488) 0:02:35.643 **** 2025-09-18 01:03:40.904004 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:40.904015 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.904025 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:40.904036 | orchestrator | 2025-09-18 01:03:40.904047 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-18 01:03:40.904058 | orchestrator | Thursday 18 September 2025 01:03:24 +0000 (0:00:10.901) 
0:02:46.545 **** 2025-09-18 01:03:40.904068 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.904079 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:40.904090 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:40.904100 | orchestrator | 2025-09-18 01:03:40.904111 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-18 01:03:40.904122 | orchestrator | Thursday 18 September 2025 01:03:32 +0000 (0:00:08.478) 0:02:55.024 **** 2025-09-18 01:03:40.904133 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:40.904144 | orchestrator | 2025-09-18 01:03:40.904154 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:03:40.904165 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 01:03:40.904177 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 01:03:40.904188 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 01:03:40.904198 | orchestrator | 2025-09-18 01:03:40.904209 | orchestrator | 2025-09-18 01:03:40.904225 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:03:40.904236 | orchestrator | Thursday 18 September 2025 01:03:40 +0000 (0:00:07.431) 0:03:02.455 **** 2025-09-18 01:03:40.904247 | orchestrator | =============================================================================== 2025-09-18 01:03:40.904258 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.73s 2025-09-18 01:03:40.904269 | orchestrator | designate : Running Designate bootstrap container ---------------------- 19.06s 2025-09-18 01:03:40.904297 | orchestrator | designate : Restart designate-api container ---------------------------- 13.42s 2025-09-18 01:03:40.904308 | orchestrator | designate : Restart designate-central container ------------------------ 12.07s 2025-09-18 01:03:40.904319 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.90s 2025-09-18 01:03:40.904330 | orchestrator | designate : Restart designate-worker container -------------------------- 8.48s 2025-09-18 01:03:40.904340 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.47s 2025-09-18 01:03:40.904351 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.43s 2025-09-18 01:03:40.904419 | orchestrator | designate : Copying over config.json files for services ----------------- 7.27s 2025-09-18 01:03:40.904431 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.04s 2025-09-18 01:03:40.904441 | orchestrator | designate : Restart designate-producer container ------------------------ 6.49s 2025-09-18 01:03:40.904453 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.41s 2025-09-18 01:03:40.904464 | orchestrator | designate : Check designate containers ---------------------------------- 5.06s 2025-09-18 01:03:40.904475 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.60s 2025-09-18 01:03:40.904486 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.40s 2025-09-18 01:03:40.904497 | orchestrator | service-ks-register : designate | Creating 
users ------------------------ 3.96s 2025-09-18 01:03:40.904513 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.74s 2025-09-18 01:03:40.904524 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.70s 2025-09-18 01:03:40.904535 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.38s 2025-09-18 01:03:40.904546 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.35s 2025-09-18 01:03:40.904556 | orchestrator | 2025-09-18 01:03:40 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:40.904568 | orchestrator | 2025-09-18 01:03:40 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:40.904579 | orchestrator | 2025-09-18 01:03:40 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:40.904590 | orchestrator | 2025-09-18 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:43.943018 | orchestrator | 2025-09-18 01:03:43 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:43.944429 | orchestrator | 2025-09-18 01:03:43 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:43.945056 | orchestrator | 2025-09-18 01:03:43 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:43.945758 | orchestrator | 2025-09-18 01:03:43 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:03:43.945782 | orchestrator | 2025-09-18 01:03:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:46.973147 | orchestrator | 2025-09-18 01:03:46 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:46.973664 | orchestrator | 2025-09-18 01:03:46 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:46.975263 | orchestrator | 2025-09-18 01:03:46 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:46.976048 | orchestrator | 2025-09-18 01:03:46 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:03:46.976124 | orchestrator | 2025-09-18 01:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:50.028808 | orchestrator | 2025-09-18 01:03:50 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:50.029315 | orchestrator | 2025-09-18 01:03:50 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state STARTED 2025-09-18 01:03:50.030372 | orchestrator | 2025-09-18 01:03:50 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:50.031231 | orchestrator | 2025-09-18 01:03:50 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:03:50.031609 | orchestrator | 2025-09-18 01:03:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:53.070874 | orchestrator | 2025-09-18 01:03:53 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:53.070960 | orchestrator | 2025-09-18 01:03:53 | INFO  | Task 61b9ddab-1edf-4326-ae4f-5686631f6085 is in state SUCCESS 2025-09-18 01:03:53.072528 | orchestrator | 2025-09-18 01:03:53.072559 | orchestrator | 2025-09-18 01:03:53.072567 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:03:53.072575 | orchestrator | 
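The designate play summarised in the recap above ends with the "Non-destructive DNS pools update" task, which applies the previously templated pools.yaml to the running deployment via designate-manage. As a rough sketch of the equivalent manual step (the container name, file path and exact invocation are assumptions, not taken from the log):

  - name: Update Designate pools from pools.yaml (illustrative sketch only)
    ansible.builtin.command: >
      docker exec designate_worker
      designate-manage pool update --file /etc/designate/pools.yaml
    run_once: true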
2025-09-18 01:03:53.072582 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:03:53.072589 | orchestrator | Thursday 18 September 2025 01:02:43 +0000 (0:00:00.806) 0:00:00.806 **** 2025-09-18 01:03:53.072596 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:03:53.072604 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:03:53.072611 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:03:53.072618 | orchestrator | 2025-09-18 01:03:53.072625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:03:53.072632 | orchestrator | Thursday 18 September 2025 01:02:44 +0000 (0:00:00.898) 0:00:01.705 **** 2025-09-18 01:03:53.072639 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-18 01:03:53.072646 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-18 01:03:53.072653 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-18 01:03:53.072660 | orchestrator | 2025-09-18 01:03:53.072667 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-18 01:03:53.072673 | orchestrator | 2025-09-18 01:03:53.072680 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-18 01:03:53.072687 | orchestrator | Thursday 18 September 2025 01:02:45 +0000 (0:00:00.937) 0:00:02.643 **** 2025-09-18 01:03:53.072694 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:03:53.072701 | orchestrator | 2025-09-18 01:03:53.072708 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-18 01:03:53.072715 | orchestrator | Thursday 18 September 2025 01:02:46 +0000 (0:00:01.450) 0:00:04.093 **** 2025-09-18 01:03:53.072722 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-18 01:03:53.072728 | orchestrator | 2025-09-18 01:03:53.072750 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-18 01:03:53.072757 | orchestrator | Thursday 18 September 2025 01:02:51 +0000 (0:00:04.380) 0:00:08.473 **** 2025-09-18 01:03:53.072763 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-18 01:03:53.072770 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-18 01:03:53.072777 | orchestrator | 2025-09-18 01:03:53.072784 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-18 01:03:53.072790 | orchestrator | Thursday 18 September 2025 01:02:57 +0000 (0:00:06.447) 0:00:14.921 **** 2025-09-18 01:03:53.072797 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:03:53.072804 | orchestrator | 2025-09-18 01:03:53.072811 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-18 01:03:53.072817 | orchestrator | Thursday 18 September 2025 01:03:00 +0000 (0:00:03.270) 0:00:18.191 **** 2025-09-18 01:03:53.072824 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:03:53.072847 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-18 01:03:53.072854 | orchestrator | 2025-09-18 01:03:53.072861 | orchestrator | TASK [service-ks-register : placement 
| Creating roles] ************************ 2025-09-18 01:03:53.072868 | orchestrator | Thursday 18 September 2025 01:03:05 +0000 (0:00:04.205) 0:00:22.397 **** 2025-09-18 01:03:53.072875 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:03:53.072881 | orchestrator | 2025-09-18 01:03:53.072888 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-18 01:03:53.072895 | orchestrator | Thursday 18 September 2025 01:03:08 +0000 (0:00:03.283) 0:00:25.681 **** 2025-09-18 01:03:53.072902 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-18 01:03:53.072908 | orchestrator | 2025-09-18 01:03:53.072915 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-18 01:03:53.072922 | orchestrator | Thursday 18 September 2025 01:03:13 +0000 (0:00:04.818) 0:00:30.499 **** 2025-09-18 01:03:53.072929 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:53.072936 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:53.072943 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:53.072949 | orchestrator | 2025-09-18 01:03:53.072956 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-18 01:03:53.072963 | orchestrator | Thursday 18 September 2025 01:03:13 +0000 (0:00:00.285) 0:00:30.784 **** 2025-09-18 01:03:53.072972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.072994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073018 | orchestrator | 2025-09-18 01:03:53.073025 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-18 01:03:53.073032 | orchestrator | Thursday 18 September 2025 01:03:14 +0000 (0:00:01.514) 0:00:32.299 **** 2025-09-18 01:03:53.073038 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:53.073045 | orchestrator | 2025-09-18 01:03:53.073107 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-18 01:03:53.073117 | orchestrator | Thursday 18 September 2025 01:03:15 +0000 (0:00:00.268) 0:00:32.568 **** 2025-09-18 01:03:53.073124 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:53.073130 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:53.073136 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:53.073143 | orchestrator | 2025-09-18 01:03:53.073149 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-18 01:03:53.073155 | orchestrator | Thursday 18 September 2025 01:03:16 +0000 (0:00:00.850) 0:00:33.418 **** 2025-09-18 01:03:53.073161 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:03:53.073168 | orchestrator | 2025-09-18 01:03:53.073174 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-18 01:03:53.073180 | orchestrator | Thursday 18 September 2025 01:03:16 +0000 (0:00:00.742) 0:00:34.161 **** 2025-09-18 01:03:53.073187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073220 | orchestrator | 2025-09-18 01:03:53.073226 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-18 01:03:53.073236 | orchestrator | Thursday 18 September 2025 01:03:18 +0000 (0:00:01.368) 0:00:35.529 **** 2025-09-18 01:03:53.073243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073249 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:53.073256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073262 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:53.073273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073279 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:53.073285 | orchestrator | 2025-09-18 01:03:53.073292 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-18 01:03:53.073298 | orchestrator | Thursday 18 September 2025 01:03:18 +0000 (0:00:00.675) 0:00:36.205 **** 2025-09-18 01:03:53.073375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073391 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:53.073402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073409 | orchestrator | skipping: 
[testbed-node-1] 2025-09-18 01:03:53.073415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073422 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:53.073428 | orchestrator | 2025-09-18 01:03:53.073434 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-18 01:03:53.073440 | orchestrator | Thursday 18 September 2025 01:03:19 +0000 (0:00:00.608) 0:00:36.814 **** 2025-09-18 01:03:53.073451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073480 | orchestrator | 2025-09-18 01:03:53.073487 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-18 01:03:53.073494 | orchestrator | Thursday 18 September 2025 01:03:20 +0000 (0:00:01.149) 0:00:37.963 **** 2025-09-18 01:03:53.073501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073538 | orchestrator | 2025-09-18 01:03:53.073544 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-18 01:03:53.073551 | orchestrator | Thursday 18 September 2025 01:03:22 +0000 (0:00:01.959) 0:00:39.923 **** 2025-09-18 01:03:53.073558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-18 01:03:53.073564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-18 01:03:53.073571 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-18 01:03:53.073577 | orchestrator | 2025-09-18 01:03:53.073584 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-18 01:03:53.073590 | orchestrator | Thursday 18 September 2025 01:03:24 +0000 (0:00:01.525) 0:00:41.448 **** 2025-09-18 01:03:53.073597 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:53.073604 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:53.073610 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:53.073617 | orchestrator | 2025-09-18 01:03:53.073626 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-18 01:03:53.073633 | orchestrator | Thursday 18 September 2025 01:03:26 +0000 (0:00:01.907) 0:00:43.355 **** 2025-09-18 01:03:53.073640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073647 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:03:53.073654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 
01:03:53.073661 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:03:53.073672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 01:03:53.073683 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:03:53.073690 | orchestrator | 2025-09-18 01:03:53.073697 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-18 01:03:53.073703 | orchestrator | Thursday 18 September 2025 01:03:27 +0000 (0:00:01.195) 0:00:44.552 **** 2025-09-18 01:03:53.073710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 01:03:53.073735 | orchestrator | 2025-09-18 01:03:53.073742 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-18 01:03:53.073748 | orchestrator | Thursday 18 September 2025 01:03:28 +0000 (0:00:01.698) 0:00:46.250 **** 2025-09-18 01:03:53.073759 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:53.073766 | orchestrator | 2025-09-18 01:03:53.073772 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-18 01:03:53.073779 | orchestrator | Thursday 18 September 2025 01:03:32 +0000 (0:00:03.117) 0:00:49.368 **** 2025-09-18 01:03:53.073786 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:53.073792 | orchestrator | 2025-09-18 01:03:53.073799 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-18 01:03:53.073805 | orchestrator | Thursday 18 September 2025 01:03:34 +0000 (0:00:02.493) 0:00:51.862 **** 2025-09-18 01:03:53.073812 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:53.073818 | orchestrator | 2025-09-18 01:03:53.073825 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-18 01:03:53.073832 | orchestrator | Thursday 18 September 2025 01:03:46 +0000 (0:00:11.619) 0:01:03.481 **** 2025-09-18 01:03:53.073838 | orchestrator | 2025-09-18 01:03:53.073845 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-18 01:03:53.073851 | orchestrator | Thursday 18 September 2025 01:03:46 +0000 (0:00:00.065) 0:01:03.547 **** 2025-09-18 01:03:53.073858 | orchestrator | 2025-09-18 01:03:53.073868 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-18 01:03:53.073874 | orchestrator | Thursday 18 September 2025 01:03:46 +0000 (0:00:00.065) 0:01:03.612 **** 2025-09-18 01:03:53.073881 | orchestrator | 2025-09-18 01:03:53.073887 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-18 01:03:53.073894 | orchestrator | Thursday 18 September 2025 01:03:46 +0000 (0:00:00.078) 0:01:03.691 **** 2025-09-18 01:03:53.073900 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:03:53.073907 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:03:53.073913 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:03:53.073920 | orchestrator | 2025-09-18 01:03:53.073926 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:03:53.073933 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 01:03:53.073941 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 01:03:53.073947 | orchestrator | testbed-node-2 : ok=12  changed=8  
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 01:03:53.073954 | orchestrator | 2025-09-18 01:03:53.073960 | orchestrator | 2025-09-18 01:03:53.073967 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:03:53.073973 | orchestrator | Thursday 18 September 2025 01:03:52 +0000 (0:00:06.038) 0:01:09.730 **** 2025-09-18 01:03:53.073980 | orchestrator | =============================================================================== 2025-09-18 01:03:53.073986 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.62s 2025-09-18 01:03:53.073994 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.45s 2025-09-18 01:03:53.074005 | orchestrator | placement : Restart placement-api container ----------------------------- 6.04s 2025-09-18 01:03:53.074013 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.82s 2025-09-18 01:03:53.074097 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.38s 2025-09-18 01:03:53.074105 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.21s 2025-09-18 01:03:53.074113 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.28s 2025-09-18 01:03:53.074120 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.27s 2025-09-18 01:03:53.074128 | orchestrator | placement : Creating placement databases -------------------------------- 3.12s 2025-09-18 01:03:53.074153 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.49s 2025-09-18 01:03:53.074167 | orchestrator | placement : Copying over placement.conf --------------------------------- 1.96s 2025-09-18 01:03:53.074175 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.91s 2025-09-18 01:03:53.074182 | orchestrator | placement : Check placement containers ---------------------------------- 1.70s 2025-09-18 01:03:53.074190 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s 2025-09-18 01:03:53.074197 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.52s 2025-09-18 01:03:53.074205 | orchestrator | placement : include_tasks ----------------------------------------------- 1.45s 2025-09-18 01:03:53.074212 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.37s 2025-09-18 01:03:53.074219 | orchestrator | placement : Copying over existing policy file --------------------------- 1.20s 2025-09-18 01:03:53.074226 | orchestrator | placement : Copying over config.json files for services ----------------- 1.15s 2025-09-18 01:03:53.074233 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-09-18 01:03:53.074240 | orchestrator | 2025-09-18 01:03:53 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:53.074248 | orchestrator | 2025-09-18 01:03:53 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:03:53.074255 | orchestrator | 2025-09-18 01:03:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:56.125176 | orchestrator | 2025-09-18 01:03:56 | INFO  | Task cdf86b7a-f5b9-4699-8a2f-788fbfbe4c68 is in state STARTED 2025-09-18 01:03:56.127100 | orchestrator | 
2025-09-18 01:03:56 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:56.129424 | orchestrator | 2025-09-18 01:03:56 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:56.131390 | orchestrator | 2025-09-18 01:03:56 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:03:56.131445 | orchestrator | 2025-09-18 01:03:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:03:59.178553 | orchestrator | 2025-09-18 01:03:59 | INFO  | Task cdf86b7a-f5b9-4699-8a2f-788fbfbe4c68 is in state SUCCESS 2025-09-18 01:03:59.178657 | orchestrator | 2025-09-18 01:03:59 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:03:59.179409 | orchestrator | 2025-09-18 01:03:59 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:03:59.181118 | orchestrator | 2025-09-18 01:03:59 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:03:59.181142 | orchestrator | 2025-09-18 01:03:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:02.228044 | orchestrator | 2025-09-18 01:04:02 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:02.230599 | orchestrator | 2025-09-18 01:04:02 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:02.233450 | orchestrator | 2025-09-18 01:04:02 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:02.235103 | orchestrator | 2025-09-18 01:04:02 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:02.235151 | orchestrator | 2025-09-18 01:04:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:05.277891 | orchestrator | 2025-09-18 01:04:05 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:05.283312 | orchestrator | 2025-09-18 01:04:05 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:05.286701 | orchestrator | 2025-09-18 01:04:05 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:05.289222 | orchestrator | 2025-09-18 01:04:05 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:05.289255 | orchestrator | 2025-09-18 01:04:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:08.337883 | orchestrator | 2025-09-18 01:04:08 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:08.339118 | orchestrator | 2025-09-18 01:04:08 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:08.341571 | orchestrator | 2025-09-18 01:04:08 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:08.342991 | orchestrator | 2025-09-18 01:04:08 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:08.343237 | orchestrator | 2025-09-18 01:04:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:11.384926 | orchestrator | 2025-09-18 01:04:11 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:11.386557 | orchestrator | 2025-09-18 01:04:11 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:11.388041 | orchestrator | 2025-09-18 01:04:11 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:11.389979 | orchestrator | 
2025-09-18 01:04:11 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:11.389991 | orchestrator | 2025-09-18 01:04:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:14.429959 | orchestrator | 2025-09-18 01:04:14 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:14.432164 | orchestrator | 2025-09-18 01:04:14 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:14.433717 | orchestrator | 2025-09-18 01:04:14 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:14.435286 | orchestrator | 2025-09-18 01:04:14 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:14.435357 | orchestrator | 2025-09-18 01:04:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:17.484016 | orchestrator | 2025-09-18 01:04:17 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:17.484410 | orchestrator | 2025-09-18 01:04:17 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:17.485578 | orchestrator | 2025-09-18 01:04:17 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:17.486584 | orchestrator | 2025-09-18 01:04:17 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:17.486610 | orchestrator | 2025-09-18 01:04:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:20.520299 | orchestrator | 2025-09-18 01:04:20 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:20.521026 | orchestrator | 2025-09-18 01:04:20 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:20.522181 | orchestrator | 2025-09-18 01:04:20 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:20.523151 | orchestrator | 2025-09-18 01:04:20 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:20.523179 | orchestrator | 2025-09-18 01:04:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:23.545014 | orchestrator | 2025-09-18 01:04:23 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:23.545242 | orchestrator | 2025-09-18 01:04:23 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:23.545881 | orchestrator | 2025-09-18 01:04:23 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:23.546531 | orchestrator | 2025-09-18 01:04:23 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:23.546555 | orchestrator | 2025-09-18 01:04:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:26.579449 | orchestrator | 2025-09-18 01:04:26 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:26.579901 | orchestrator | 2025-09-18 01:04:26 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:26.580845 | orchestrator | 2025-09-18 01:04:26 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:26.581752 | orchestrator | 2025-09-18 01:04:26 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:26.581870 | orchestrator | 2025-09-18 01:04:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:29.622874 | orchestrator | 2025-09-18 01:04:29 | INFO 
 | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:29.623660 | orchestrator | 2025-09-18 01:04:29 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:29.625521 | orchestrator | 2025-09-18 01:04:29 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:29.627238 | orchestrator | 2025-09-18 01:04:29 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:29.627269 | orchestrator | 2025-09-18 01:04:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:32.679897 | orchestrator | 2025-09-18 01:04:32 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:32.682430 | orchestrator | 2025-09-18 01:04:32 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:32.684358 | orchestrator | 2025-09-18 01:04:32 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:32.685800 | orchestrator | 2025-09-18 01:04:32 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:32.685989 | orchestrator | 2025-09-18 01:04:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:35.731065 | orchestrator | 2025-09-18 01:04:35 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:35.731384 | orchestrator | 2025-09-18 01:04:35 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:35.733381 | orchestrator | 2025-09-18 01:04:35 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:35.733994 | orchestrator | 2025-09-18 01:04:35 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:35.734360 | orchestrator | 2025-09-18 01:04:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:38.781388 | orchestrator | 2025-09-18 01:04:38 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:38.782888 | orchestrator | 2025-09-18 01:04:38 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:38.785634 | orchestrator | 2025-09-18 01:04:38 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:38.787361 | orchestrator | 2025-09-18 01:04:38 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:38.787593 | orchestrator | 2025-09-18 01:04:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:41.832551 | orchestrator | 2025-09-18 01:04:41 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:41.834366 | orchestrator | 2025-09-18 01:04:41 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:41.836691 | orchestrator | 2025-09-18 01:04:41 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:41.839201 | orchestrator | 2025-09-18 01:04:41 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:41.839445 | orchestrator | 2025-09-18 01:04:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:44.888689 | orchestrator | 2025-09-18 01:04:44 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:44.888787 | orchestrator | 2025-09-18 01:04:44 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:44.888801 | orchestrator | 2025-09-18 01:04:44 | INFO  | 
Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:44.888812 | orchestrator | 2025-09-18 01:04:44 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:44.888824 | orchestrator | 2025-09-18 01:04:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:47.917758 | orchestrator | 2025-09-18 01:04:47 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:47.920029 | orchestrator | 2025-09-18 01:04:47 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:47.922408 | orchestrator | 2025-09-18 01:04:47 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:47.924555 | orchestrator | 2025-09-18 01:04:47 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:47.924582 | orchestrator | 2025-09-18 01:04:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:50.969739 | orchestrator | 2025-09-18 01:04:50 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:50.969963 | orchestrator | 2025-09-18 01:04:50 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:50.971883 | orchestrator | 2025-09-18 01:04:50 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:50.973249 | orchestrator | 2025-09-18 01:04:50 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:50.973413 | orchestrator | 2025-09-18 01:04:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:54.014353 | orchestrator | 2025-09-18 01:04:54 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:54.015937 | orchestrator | 2025-09-18 01:04:54 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:54.017377 | orchestrator | 2025-09-18 01:04:54 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:54.019425 | orchestrator | 2025-09-18 01:04:54 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:54.019471 | orchestrator | 2025-09-18 01:04:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:04:57.065170 | orchestrator | 2025-09-18 01:04:57 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:04:57.067113 | orchestrator | 2025-09-18 01:04:57 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:04:57.070110 | orchestrator | 2025-09-18 01:04:57 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:04:57.072923 | orchestrator | 2025-09-18 01:04:57 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:04:57.073010 | orchestrator | 2025-09-18 01:04:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:00.114987 | orchestrator | 2025-09-18 01:05:00 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:00.116127 | orchestrator | 2025-09-18 01:05:00 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:05:00.117679 | orchestrator | 2025-09-18 01:05:00 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:00.118676 | orchestrator | 2025-09-18 01:05:00 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:00.118796 | orchestrator | 2025-09-18 01:05:00 | INFO  | 
Wait 1 second(s) until the next check 2025-09-18 01:05:03.167478 | orchestrator | 2025-09-18 01:05:03 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:03.168851 | orchestrator | 2025-09-18 01:05:03 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:05:03.170178 | orchestrator | 2025-09-18 01:05:03 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:03.171840 | orchestrator | 2025-09-18 01:05:03 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:03.171868 | orchestrator | 2025-09-18 01:05:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:06.205410 | orchestrator | 2025-09-18 01:05:06 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:06.205492 | orchestrator | 2025-09-18 01:05:06 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:05:06.206166 | orchestrator | 2025-09-18 01:05:06 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:06.206984 | orchestrator | 2025-09-18 01:05:06 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:06.207017 | orchestrator | 2025-09-18 01:05:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:09.231015 | orchestrator | 2025-09-18 01:05:09 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:09.231703 | orchestrator | 2025-09-18 01:05:09 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:05:09.232474 | orchestrator | 2025-09-18 01:05:09 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:09.233393 | orchestrator | 2025-09-18 01:05:09 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:09.233419 | orchestrator | 2025-09-18 01:05:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:12.254702 | orchestrator | 2025-09-18 01:05:12 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:12.254816 | orchestrator | 2025-09-18 01:05:12 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:05:12.255600 | orchestrator | 2025-09-18 01:05:12 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:12.257313 | orchestrator | 2025-09-18 01:05:12 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:12.257410 | orchestrator | 2025-09-18 01:05:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:15.295413 | orchestrator | 2025-09-18 01:05:15 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:15.295967 | orchestrator | 2025-09-18 01:05:15 | INFO  | Task 48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state STARTED 2025-09-18 01:05:15.296681 | orchestrator | 2025-09-18 01:05:15 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:15.297248 | orchestrator | 2025-09-18 01:05:15 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:15.297392 | orchestrator | 2025-09-18 01:05:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:18.330478 | orchestrator | 2025-09-18 01:05:18 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:18.332864 | orchestrator | 2025-09-18 01:05:18 | INFO  | Task 
48c438b4-bacf-42f5-a496-bcc57ec24bb4 is in state SUCCESS 2025-09-18 01:05:18.334524 | orchestrator | 2025-09-18 01:05:18.334849 | orchestrator | 2025-09-18 01:05:18.334881 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:05:18.334893 | orchestrator | 2025-09-18 01:05:18.334903 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:05:18.334913 | orchestrator | Thursday 18 September 2025 01:03:56 +0000 (0:00:00.176) 0:00:00.176 **** 2025-09-18 01:05:18.334923 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:18.334939 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:05:18.334956 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:05:18.334972 | orchestrator | 2025-09-18 01:05:18.334989 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:05:18.335006 | orchestrator | Thursday 18 September 2025 01:03:57 +0000 (0:00:00.289) 0:00:00.466 **** 2025-09-18 01:05:18.335024 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-18 01:05:18.335040 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-18 01:05:18.335056 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-18 01:05:18.335071 | orchestrator | 2025-09-18 01:05:18.335081 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-18 01:05:18.335091 | orchestrator | 2025-09-18 01:05:18.335101 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-18 01:05:18.335111 | orchestrator | Thursday 18 September 2025 01:03:57 +0000 (0:00:00.585) 0:00:01.051 **** 2025-09-18 01:05:18.335121 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:05:18.335130 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:05:18.335140 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:18.335149 | orchestrator | 2025-09-18 01:05:18.335159 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:05:18.335169 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:05:18.335180 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:05:18.335190 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 01:05:18.335199 | orchestrator | 2025-09-18 01:05:18.335209 | orchestrator | 2025-09-18 01:05:18.335218 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:05:18.335228 | orchestrator | Thursday 18 September 2025 01:03:58 +0000 (0:00:00.659) 0:00:01.711 **** 2025-09-18 01:05:18.335237 | orchestrator | =============================================================================== 2025-09-18 01:05:18.335247 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.66s 2025-09-18 01:05:18.335309 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-09-18 01:05:18.335320 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-09-18 01:05:18.335329 | orchestrator | 2025-09-18 01:05:18.335339 | orchestrator | 2025-09-18 01:05:18.335349 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-18 01:05:18.335358 | orchestrator | 2025-09-18 01:05:18.335386 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:05:18.335483 | orchestrator | Thursday 18 September 2025 01:03:31 +0000 (0:00:00.316) 0:00:00.316 **** 2025-09-18 01:05:18.335497 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:18.335508 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:05:18.335517 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:05:18.335527 | orchestrator | 2025-09-18 01:05:18.335537 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:05:18.335546 | orchestrator | Thursday 18 September 2025 01:03:32 +0000 (0:00:00.359) 0:00:00.676 **** 2025-09-18 01:05:18.335556 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-18 01:05:18.335566 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-18 01:05:18.335575 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-18 01:05:18.335586 | orchestrator | 2025-09-18 01:05:18.335595 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-18 01:05:18.335605 | orchestrator | 2025-09-18 01:05:18.335615 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-18 01:05:18.335636 | orchestrator | Thursday 18 September 2025 01:03:32 +0000 (0:00:00.377) 0:00:01.053 **** 2025-09-18 01:05:18.335646 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:18.335656 | orchestrator | 2025-09-18 01:05:18.335666 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-18 01:05:18.335675 | orchestrator | Thursday 18 September 2025 01:03:33 +0000 (0:00:00.504) 0:00:01.558 **** 2025-09-18 01:05:18.335685 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-18 01:05:18.335695 | orchestrator | 2025-09-18 01:05:18.335705 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-18 01:05:18.335714 | orchestrator | Thursday 18 September 2025 01:03:36 +0000 (0:00:03.485) 0:00:05.043 **** 2025-09-18 01:05:18.335724 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-18 01:05:18.335734 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-18 01:05:18.335744 | orchestrator | 2025-09-18 01:05:18.335753 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-18 01:05:18.335763 | orchestrator | Thursday 18 September 2025 01:03:42 +0000 (0:00:05.568) 0:00:10.612 **** 2025-09-18 01:05:18.335772 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:05:18.335782 | orchestrator | 2025-09-18 01:05:18.335792 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-18 01:05:18.335801 | orchestrator | Thursday 18 September 2025 01:03:45 +0000 (0:00:02.825) 0:00:13.437 **** 2025-09-18 01:05:18.335823 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:05:18.335833 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-18 01:05:18.335843 | orchestrator | 2025-09-18 
01:05:18.335853 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-18 01:05:18.335862 | orchestrator | Thursday 18 September 2025 01:03:49 +0000 (0:00:04.344) 0:00:17.782 **** 2025-09-18 01:05:18.335872 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:05:18.335882 | orchestrator | 2025-09-18 01:05:18.335891 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-18 01:05:18.335901 | orchestrator | Thursday 18 September 2025 01:03:52 +0000 (0:00:03.540) 0:00:21.322 **** 2025-09-18 01:05:18.335910 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-18 01:05:18.335920 | orchestrator | 2025-09-18 01:05:18.335929 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-18 01:05:18.335939 | orchestrator | Thursday 18 September 2025 01:03:56 +0000 (0:00:03.787) 0:00:25.109 **** 2025-09-18 01:05:18.335948 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.335958 | orchestrator | 2025-09-18 01:05:18.335974 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-18 01:05:18.335984 | orchestrator | Thursday 18 September 2025 01:03:59 +0000 (0:00:03.181) 0:00:28.291 **** 2025-09-18 01:05:18.335993 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.336003 | orchestrator | 2025-09-18 01:05:18.336013 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-18 01:05:18.336022 | orchestrator | Thursday 18 September 2025 01:04:03 +0000 (0:00:03.735) 0:00:32.027 **** 2025-09-18 01:05:18.336032 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.336041 | orchestrator | 2025-09-18 01:05:18.336051 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-18 01:05:18.336060 | orchestrator | Thursday 18 September 2025 01:04:07 +0000 (0:00:03.438) 0:00:35.466 **** 2025-09-18 01:05:18.336072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336165 | orchestrator | 2025-09-18 01:05:18.336176 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-18 01:05:18.336188 | orchestrator | Thursday 18 September 2025 01:04:08 +0000 (0:00:01.383) 0:00:36.850 **** 2025-09-18 01:05:18.336198 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:18.336207 | orchestrator | 2025-09-18 01:05:18.336217 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-18 01:05:18.336226 | orchestrator | Thursday 18 September 2025 01:04:08 +0000 (0:00:00.119) 0:00:36.969 **** 2025-09-18 01:05:18.336236 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:18.336245 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:18.336281 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:18.336291 | orchestrator | 2025-09-18 01:05:18.336301 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-18 01:05:18.336311 | orchestrator | Thursday 18 September 2025 01:04:09 +0000 (0:00:00.450) 0:00:37.420 **** 2025-09-18 01:05:18.336320 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 01:05:18.336330 | orchestrator | 2025-09-18 01:05:18.336339 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-18 01:05:18.336354 | orchestrator | Thursday 18 September 2025 01:04:09 +0000 (0:00:00.818) 0:00:38.239 **** 2025-09-18 01:05:18.336364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336450 | orchestrator | 2025-09-18 01:05:18.336460 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-18 01:05:18.336470 | orchestrator | Thursday 18 September 2025 01:04:12 +0000 (0:00:02.377) 0:00:40.616 **** 2025-09-18 01:05:18.336480 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:18.336489 | orchestrator | ok: [testbed-node-1] 2025-09-18 
01:05:18.336499 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:05:18.336508 | orchestrator | 2025-09-18 01:05:18.336518 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-18 01:05:18.336533 | orchestrator | Thursday 18 September 2025 01:04:12 +0000 (0:00:00.296) 0:00:40.912 **** 2025-09-18 01:05:18.336543 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:18.336553 | orchestrator | 2025-09-18 01:05:18.336563 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-18 01:05:18.336572 | orchestrator | Thursday 18 September 2025 01:04:13 +0000 (0:00:00.660) 0:00:41.573 **** 2025-09-18 01:05:18.336583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-09-18 01:05:18.336618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.336661 | orchestrator | 2025-09-18 01:05:18.336671 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-18 01:05:18.336681 | orchestrator | Thursday 18 September 2025 01:04:15 +0000 (0:00:02.529) 0:00:44.103 **** 2025-09-18 01:05:18.336691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2025-09-18 01:05:18.336705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.336716 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:18.336737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.336754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.336765 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:18.336775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2025-09-18 01:05:18.336785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.336795 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:18.336805 | orchestrator | 2025-09-18 01:05:18.336814 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-18 01:05:18.336824 | orchestrator | Thursday 18 September 2025 01:04:16 +0000 (0:00:00.633) 0:00:44.737 **** 2025-09-18 01:05:18.336839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.336855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.336866 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:18.336882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.336893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.336903 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:18.336913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.336932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.336942 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:18.336952 | orchestrator | 2025-09-18 01:05:18.337048 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-18 01:05:18.337059 | orchestrator | Thursday 18 September 2025 01:04:17 +0000 (0:00:00.991) 0:00:45.729 **** 2025-09-18 01:05:18.337077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337158 | orchestrator | 2025-09-18 01:05:18.337168 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-18 01:05:18.337178 | orchestrator | Thursday 18 September 2025 01:04:20 +0000 (0:00:02.683) 0:00:48.413 **** 2025-09-18 01:05:18.337188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337343 | orchestrator | 2025-09-18 01:05:18.337353 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-18 01:05:18.337363 | orchestrator | Thursday 18 September 2025 01:04:26 +0000 (0:00:06.255) 0:00:54.668 **** 2025-09-18 01:05:18.337373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.337393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.337403 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:18.337413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.337430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.337440 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:18.337450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 01:05:18.337461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:18.337476 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:18.337486 | orchestrator | 2025-09-18 01:05:18.337496 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-18 01:05:18.337506 | orchestrator | Thursday 18 September 2025 01:04:26 +0000 (0:00:00.595) 0:00:55.264 **** 2025-09-18 01:05:18.337520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 01:05:18.337556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:18.337596 | orchestrator | 2025-09-18 01:05:18.337606 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-09-18 01:05:18.337616 | orchestrator | Thursday 18 September 2025 01:04:28 +0000 (0:00:01.936) 0:00:57.201 **** 2025-09-18 01:05:18.337626 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:18.337635 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:18.337645 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:18.337655 | orchestrator | 2025-09-18 01:05:18.337665 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-18 01:05:18.337674 | orchestrator | Thursday 18 September 2025 01:04:29 +0000 (0:00:00.261) 0:00:57.462 **** 2025-09-18 01:05:18.337684 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.337694 | orchestrator | 2025-09-18 01:05:18.337704 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-18 01:05:18.337714 | orchestrator | Thursday 18 September 2025 01:04:31 +0000 (0:00:02.517) 0:00:59.980 **** 2025-09-18 01:05:18.337723 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.337733 | orchestrator | 2025-09-18 01:05:18.337743 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-18 01:05:18.337753 | orchestrator | Thursday 18 September 2025 01:04:34 +0000 (0:00:02.570) 0:01:02.550 **** 2025-09-18 01:05:18.337767 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.337777 | orchestrator | 2025-09-18 01:05:18.337787 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-18 01:05:18.337797 | orchestrator | Thursday 18 September 2025 01:04:50 +0000 (0:00:16.209) 0:01:18.760 **** 2025-09-18 01:05:18.337807 | orchestrator | 2025-09-18 01:05:18.337816 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-18 01:05:18.337826 | orchestrator | Thursday 18 September 2025 01:04:50 +0000 (0:00:00.063) 0:01:18.823 **** 2025-09-18 01:05:18.337836 | orchestrator | 2025-09-18 01:05:18.337845 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-18 01:05:18.337855 | orchestrator | Thursday 18 September 2025 01:04:50 +0000 (0:00:00.058) 0:01:18.882 **** 2025-09-18 01:05:18.337873 | orchestrator | 2025-09-18 01:05:18.337883 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-18 01:05:18.337893 | orchestrator | Thursday 18 September 2025 01:04:50 +0000 (0:00:00.060) 0:01:18.943 **** 2025-09-18 01:05:18.337905 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.337917 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:18.337929 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:18.337940 | orchestrator | 2025-09-18 01:05:18.337951 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-18 01:05:18.337963 | orchestrator | Thursday 18 September 2025 01:05:03 +0000 (0:00:12.652) 0:01:31.596 **** 2025-09-18 01:05:18.337974 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:18.337985 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:18.337995 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:18.338006 | orchestrator | 2025-09-18 01:05:18.338053 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:05:18.338068 | orchestrator | testbed-node-0 : ok=26  changed=18  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 01:05:18.338080 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 01:05:18.338090 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 01:05:18.338100 | orchestrator | 2025-09-18 01:05:18.338110 | orchestrator | 2025-09-18 01:05:18.338119 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:05:18.338129 | orchestrator | Thursday 18 September 2025 01:05:15 +0000 (0:00:12.100) 0:01:43.696 **** 2025-09-18 01:05:18.338139 | orchestrator | =============================================================================== 2025-09-18 01:05:18.338148 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.21s 2025-09-18 01:05:18.338158 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.65s 2025-09-18 01:05:18.338168 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.10s 2025-09-18 01:05:18.338177 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.26s 2025-09-18 01:05:18.338187 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.57s 2025-09-18 01:05:18.338196 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.34s 2025-09-18 01:05:18.338206 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s 2025-09-18 01:05:18.338216 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.74s 2025-09-18 01:05:18.338225 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.54s 2025-09-18 01:05:18.338235 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.49s 2025-09-18 01:05:18.338244 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.44s 2025-09-18 01:05:18.338277 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.18s 2025-09-18 01:05:18.338288 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.83s 2025-09-18 01:05:18.338302 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2025-09-18 01:05:18.338312 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.57s 2025-09-18 01:05:18.338322 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.53s 2025-09-18 01:05:18.338332 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.52s 2025-09-18 01:05:18.338341 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.38s 2025-09-18 01:05:18.338351 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.94s 2025-09-18 01:05:18.338361 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.38s 2025-09-18 01:05:18.338378 | orchestrator | 2025-09-18 01:05:18 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:18.338388 | orchestrator | 2025-09-18 01:05:18 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:18.338397 | 
orchestrator | 2025-09-18 01:05:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:21.381826 | orchestrator | 2025-09-18 01:05:21 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:21.384036 | orchestrator | 2025-09-18 01:05:21 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:21.386135 | orchestrator | 2025-09-18 01:05:21 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:21.386385 | orchestrator | 2025-09-18 01:05:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:24.433503 | orchestrator | 2025-09-18 01:05:24 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:24.434749 | orchestrator | 2025-09-18 01:05:24 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:24.436676 | orchestrator | 2025-09-18 01:05:24 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:24.436701 | orchestrator | 2025-09-18 01:05:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:27.477596 | orchestrator | 2025-09-18 01:05:27 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:27.478669 | orchestrator | 2025-09-18 01:05:27 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:27.480381 | orchestrator | 2025-09-18 01:05:27 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:27.480405 | orchestrator | 2025-09-18 01:05:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:30.524911 | orchestrator | 2025-09-18 01:05:30 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:30.525782 | orchestrator | 2025-09-18 01:05:30 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:30.527083 | orchestrator | 2025-09-18 01:05:30 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:30.527111 | orchestrator | 2025-09-18 01:05:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:33.583412 | orchestrator | 2025-09-18 01:05:33 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:33.586581 | orchestrator | 2025-09-18 01:05:33 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:33.589658 | orchestrator | 2025-09-18 01:05:33 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:33.590069 | orchestrator | 2025-09-18 01:05:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:36.626554 | orchestrator | 2025-09-18 01:05:36 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:36.627158 | orchestrator | 2025-09-18 01:05:36 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:36.628950 | orchestrator | 2025-09-18 01:05:36 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:36.629153 | orchestrator | 2025-09-18 01:05:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:39.667727 | orchestrator | 2025-09-18 01:05:39 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:39.670326 | orchestrator | 2025-09-18 01:05:39 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:39.672574 | orchestrator | 2025-09-18 01:05:39 | INFO  | Task 
02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:39.672640 | orchestrator | 2025-09-18 01:05:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:42.719574 | orchestrator | 2025-09-18 01:05:42 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:42.720605 | orchestrator | 2025-09-18 01:05:42 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:42.722442 | orchestrator | 2025-09-18 01:05:42 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:42.722473 | orchestrator | 2025-09-18 01:05:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:45.772660 | orchestrator | 2025-09-18 01:05:45 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:45.774282 | orchestrator | 2025-09-18 01:05:45 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:45.776192 | orchestrator | 2025-09-18 01:05:45 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:45.776389 | orchestrator | 2025-09-18 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:48.817712 | orchestrator | 2025-09-18 01:05:48 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state STARTED 2025-09-18 01:05:48.821218 | orchestrator | 2025-09-18 01:05:48 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED 2025-09-18 01:05:48.823113 | orchestrator | 2025-09-18 01:05:48 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:48.823140 | orchestrator | 2025-09-18 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:05:51.876387 | orchestrator | 2025-09-18 01:05:51 | INFO  | Task 895b9946-147c-44ae-b640-9cac7c8fb4f3 is in state SUCCESS 2025-09-18 01:05:51.878504 | orchestrator | 2025-09-18 01:05:51.878545 | orchestrator | 2025-09-18 01:05:51.878558 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:05:51.878571 | orchestrator | 2025-09-18 01:05:51.878582 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-18 01:05:51.878594 | orchestrator | Thursday 18 September 2025 00:57:20 +0000 (0:00:00.294) 0:00:00.294 **** 2025-09-18 01:05:51.878605 | orchestrator | changed: [testbed-manager] 2025-09-18 01:05:51.878617 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.878628 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.878639 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.878649 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.878660 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.878671 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.878682 | orchestrator | 2025-09-18 01:05:51.878693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:05:51.878703 | orchestrator | Thursday 18 September 2025 00:57:20 +0000 (0:00:00.707) 0:00:01.002 **** 2025-09-18 01:05:51.878714 | orchestrator | changed: [testbed-manager] 2025-09-18 01:05:51.879819 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.879853 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.879865 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.879876 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.879887 | orchestrator | changed: 
[testbed-node-4] 2025-09-18 01:05:51.879898 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.879910 | orchestrator | 2025-09-18 01:05:51.879921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:05:51.879933 | orchestrator | Thursday 18 September 2025 00:57:21 +0000 (0:00:00.591) 0:00:01.593 **** 2025-09-18 01:05:51.880062 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-18 01:05:51.880078 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-18 01:05:51.880089 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-18 01:05:51.880100 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-18 01:05:51.880111 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-18 01:05:51.880121 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-18 01:05:51.880132 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-18 01:05:51.880143 | orchestrator | 2025-09-18 01:05:51.880154 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-18 01:05:51.880165 | orchestrator | 2025-09-18 01:05:51.880176 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-18 01:05:51.880187 | orchestrator | Thursday 18 September 2025 00:57:22 +0000 (0:00:00.774) 0:00:02.368 **** 2025-09-18 01:05:51.880198 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.880209 | orchestrator | 2025-09-18 01:05:51.880220 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-18 01:05:51.880251 | orchestrator | Thursday 18 September 2025 00:57:22 +0000 (0:00:00.583) 0:00:02.951 **** 2025-09-18 01:05:51.880263 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-18 01:05:51.880274 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-18 01:05:51.880284 | orchestrator | 2025-09-18 01:05:51.880295 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-18 01:05:51.880306 | orchestrator | Thursday 18 September 2025 00:57:27 +0000 (0:00:04.700) 0:00:07.651 **** 2025-09-18 01:05:51.880317 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 01:05:51.880327 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 01:05:51.880338 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.880348 | orchestrator | 2025-09-18 01:05:51.880359 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-18 01:05:51.880370 | orchestrator | Thursday 18 September 2025 00:57:31 +0000 (0:00:04.332) 0:00:11.984 **** 2025-09-18 01:05:51.880390 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.880403 | orchestrator | 2025-09-18 01:05:51.880416 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-18 01:05:51.880429 | orchestrator | Thursday 18 September 2025 00:57:32 +0000 (0:00:00.652) 0:00:12.636 **** 2025-09-18 01:05:51.880441 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.880454 | orchestrator | 2025-09-18 01:05:51.880467 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-18 01:05:51.880477 | orchestrator | Thursday 18 
September 2025 00:57:34 +0000 (0:00:01.610) 0:00:14.246 **** 2025-09-18 01:05:51.880488 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.880498 | orchestrator | 2025-09-18 01:05:51.880509 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-18 01:05:51.880520 | orchestrator | Thursday 18 September 2025 00:57:37 +0000 (0:00:03.058) 0:00:17.305 **** 2025-09-18 01:05:51.880531 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.880542 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.880553 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.880563 | orchestrator | 2025-09-18 01:05:51.880574 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-18 01:05:51.880585 | orchestrator | Thursday 18 September 2025 00:57:37 +0000 (0:00:00.541) 0:00:17.846 **** 2025-09-18 01:05:51.880596 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.880606 | orchestrator | 2025-09-18 01:05:51.880617 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-18 01:05:51.880628 | orchestrator | Thursday 18 September 2025 00:58:07 +0000 (0:00:30.083) 0:00:47.930 **** 2025-09-18 01:05:51.880638 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.880659 | orchestrator | 2025-09-18 01:05:51.880669 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-18 01:05:51.880680 | orchestrator | Thursday 18 September 2025 00:58:22 +0000 (0:00:14.757) 0:01:02.687 **** 2025-09-18 01:05:51.880691 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.880701 | orchestrator | 2025-09-18 01:05:51.880712 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-18 01:05:51.880723 | orchestrator | Thursday 18 September 2025 00:58:34 +0000 (0:00:12.443) 0:01:15.130 **** 2025-09-18 01:05:51.881076 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.881095 | orchestrator | 2025-09-18 01:05:51.881107 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-18 01:05:51.881118 | orchestrator | Thursday 18 September 2025 00:58:36 +0000 (0:00:01.064) 0:01:16.195 **** 2025-09-18 01:05:51.881129 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.881139 | orchestrator | 2025-09-18 01:05:51.881150 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-18 01:05:51.881161 | orchestrator | Thursday 18 September 2025 00:58:36 +0000 (0:00:00.563) 0:01:16.758 **** 2025-09-18 01:05:51.881172 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.881183 | orchestrator | 2025-09-18 01:05:51.881194 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-18 01:05:51.881205 | orchestrator | Thursday 18 September 2025 00:58:37 +0000 (0:00:00.472) 0:01:17.231 **** 2025-09-18 01:05:51.881216 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.881279 | orchestrator | 2025-09-18 01:05:51.881291 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-18 01:05:51.881302 | orchestrator | Thursday 18 September 2025 00:58:56 +0000 (0:00:19.358) 0:01:36.590 **** 2025-09-18 01:05:51.881313 | orchestrator | skipping: [testbed-node-0] 2025-09-18 
01:05:51.881324 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881335 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881345 | orchestrator | 2025-09-18 01:05:51.881356 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-18 01:05:51.881367 | orchestrator | 2025-09-18 01:05:51.881378 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-18 01:05:51.881389 | orchestrator | Thursday 18 September 2025 00:58:56 +0000 (0:00:00.343) 0:01:36.934 **** 2025-09-18 01:05:51.881400 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.881411 | orchestrator | 2025-09-18 01:05:51.881422 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-18 01:05:51.881433 | orchestrator | Thursday 18 September 2025 00:58:57 +0000 (0:00:00.677) 0:01:37.611 **** 2025-09-18 01:05:51.881444 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881455 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881466 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.881476 | orchestrator | 2025-09-18 01:05:51.881487 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-18 01:05:51.881498 | orchestrator | Thursday 18 September 2025 00:59:00 +0000 (0:00:02.596) 0:01:40.208 **** 2025-09-18 01:05:51.881509 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881520 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881531 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.881542 | orchestrator | 2025-09-18 01:05:51.881553 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-18 01:05:51.881564 | orchestrator | Thursday 18 September 2025 00:59:02 +0000 (0:00:02.663) 0:01:42.872 **** 2025-09-18 01:05:51.881575 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.881586 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881597 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881607 | orchestrator | 2025-09-18 01:05:51.881618 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-18 01:05:51.881644 | orchestrator | Thursday 18 September 2025 00:59:03 +0000 (0:00:00.749) 0:01:43.621 **** 2025-09-18 01:05:51.881655 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-18 01:05:51.881666 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881677 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-18 01:05:51.881688 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881699 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-18 01:05:51.881710 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-18 01:05:51.881722 | orchestrator | 2025-09-18 01:05:51.881743 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-18 01:05:51.881758 | orchestrator | Thursday 18 September 2025 00:59:12 +0000 (0:00:08.765) 0:01:52.387 **** 2025-09-18 01:05:51.881771 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.881784 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881796 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881808 | orchestrator | 2025-09-18 01:05:51.881821 | 
orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-18 01:05:51.881833 | orchestrator | Thursday 18 September 2025 00:59:12 +0000 (0:00:00.302) 0:01:52.690 **** 2025-09-18 01:05:51.881846 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-18 01:05:51.881858 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.881871 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-18 01:05:51.881883 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881897 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-18 01:05:51.881909 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881922 | orchestrator | 2025-09-18 01:05:51.881934 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-18 01:05:51.881946 | orchestrator | Thursday 18 September 2025 00:59:13 +0000 (0:00:00.683) 0:01:53.374 **** 2025-09-18 01:05:51.881959 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.881971 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.881983 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.881996 | orchestrator | 2025-09-18 01:05:51.882008 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-18 01:05:51.882096 | orchestrator | Thursday 18 September 2025 00:59:13 +0000 (0:00:00.478) 0:01:53.852 **** 2025-09-18 01:05:51.882109 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882122 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882133 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.882144 | orchestrator | 2025-09-18 01:05:51.882155 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-18 01:05:51.882166 | orchestrator | Thursday 18 September 2025 00:59:14 +0000 (0:00:01.070) 0:01:54.923 **** 2025-09-18 01:05:51.882177 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882188 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882356 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.882377 | orchestrator | 2025-09-18 01:05:51.882388 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-18 01:05:51.882399 | orchestrator | Thursday 18 September 2025 00:59:17 +0000 (0:00:02.242) 0:01:57.165 **** 2025-09-18 01:05:51.882410 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882421 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882431 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.882442 | orchestrator | 2025-09-18 01:05:51.882453 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-18 01:05:51.882464 | orchestrator | Thursday 18 September 2025 00:59:37 +0000 (0:00:20.183) 0:02:17.349 **** 2025-09-18 01:05:51.882475 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882485 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882496 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.882507 | orchestrator | 2025-09-18 01:05:51.882518 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-18 01:05:51.882539 | orchestrator | Thursday 18 September 2025 00:59:50 +0000 (0:00:13.466) 0:02:30.815 **** 2025-09-18 01:05:51.882550 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.882560 | 
orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882571 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882582 | orchestrator | 2025-09-18 01:05:51.882593 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-18 01:05:51.882603 | orchestrator | Thursday 18 September 2025 00:59:51 +0000 (0:00:01.095) 0:02:31.911 **** 2025-09-18 01:05:51.882614 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882625 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882687 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.882697 | orchestrator | 2025-09-18 01:05:51.882707 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-18 01:05:51.882717 | orchestrator | Thursday 18 September 2025 01:00:03 +0000 (0:00:11.372) 0:02:43.284 **** 2025-09-18 01:05:51.882727 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.882737 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882746 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882756 | orchestrator | 2025-09-18 01:05:51.882766 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-18 01:05:51.882776 | orchestrator | Thursday 18 September 2025 01:00:04 +0000 (0:00:00.996) 0:02:44.281 **** 2025-09-18 01:05:51.882785 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.882795 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.882805 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.882815 | orchestrator | 2025-09-18 01:05:51.882824 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-18 01:05:51.882834 | orchestrator | 2025-09-18 01:05:51.882844 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-18 01:05:51.882854 | orchestrator | Thursday 18 September 2025 01:00:04 +0000 (0:00:00.406) 0:02:44.687 **** 2025-09-18 01:05:51.882864 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.882875 | orchestrator | 2025-09-18 01:05:51.882884 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-18 01:05:51.882894 | orchestrator | Thursday 18 September 2025 01:00:05 +0000 (0:00:00.479) 0:02:45.167 **** 2025-09-18 01:05:51.882904 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-18 01:05:51.882914 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-18 01:05:51.882924 | orchestrator | 2025-09-18 01:05:51.882934 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-18 01:05:51.882944 | orchestrator | Thursday 18 September 2025 01:00:08 +0000 (0:00:03.124) 0:02:48.292 **** 2025-09-18 01:05:51.883008 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-18 01:05:51.883030 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-18 01:05:51.883041 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-18 01:05:51.883051 | orchestrator | changed: [testbed-node-0] => (item=nova -> 
https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-18 01:05:51.883061 | orchestrator | 2025-09-18 01:05:51.883070 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-18 01:05:51.883080 | orchestrator | Thursday 18 September 2025 01:00:14 +0000 (0:00:06.819) 0:02:55.111 **** 2025-09-18 01:05:51.883090 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:05:51.883099 | orchestrator | 2025-09-18 01:05:51.883109 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-18 01:05:51.883118 | orchestrator | Thursday 18 September 2025 01:00:18 +0000 (0:00:03.854) 0:02:58.966 **** 2025-09-18 01:05:51.883128 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:05:51.883146 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-18 01:05:51.883156 | orchestrator | 2025-09-18 01:05:51.883165 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-18 01:05:51.883175 | orchestrator | Thursday 18 September 2025 01:00:22 +0000 (0:00:03.990) 0:03:02.957 **** 2025-09-18 01:05:51.883185 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:05:51.883195 | orchestrator | 2025-09-18 01:05:51.883204 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-18 01:05:51.883214 | orchestrator | Thursday 18 September 2025 01:00:26 +0000 (0:00:03.275) 0:03:06.233 **** 2025-09-18 01:05:51.883241 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-18 01:05:51.883252 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-18 01:05:51.883261 | orchestrator | 2025-09-18 01:05:51.883271 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-18 01:05:51.883348 | orchestrator | Thursday 18 September 2025 01:00:33 +0000 (0:00:07.654) 0:03:13.887 **** 2025-09-18 01:05:51.883367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.883382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.883400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.883419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.883462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.883475 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.883485 | orchestrator | 2025-09-18 01:05:51.883495 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-18 01:05:51.883505 | orchestrator | Thursday 18 September 2025 01:00:36 +0000 (0:00:02.496) 0:03:16.383 **** 2025-09-18 01:05:51.883515 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.883524 | orchestrator | 2025-09-18 01:05:51.883534 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-18 01:05:51.883544 | orchestrator | Thursday 18 September 2025 01:00:36 +0000 (0:00:00.274) 0:03:16.657 **** 2025-09-18 01:05:51.883553 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.883563 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.883573 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.883582 | orchestrator | 2025-09-18 01:05:51.883592 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-18 01:05:51.883602 | orchestrator | Thursday 18 September 2025 01:00:36 +0000 (0:00:00.325) 0:03:16.983 **** 2025-09-18 01:05:51.883611 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 01:05:51.883621 | orchestrator | 2025-09-18 01:05:51.883631 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-18 01:05:51.883640 | orchestrator | Thursday 18 September 2025 01:00:38 +0000 (0:00:01.322) 0:03:18.306 **** 2025-09-18 01:05:51.883650 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.883673 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.883683 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.883692 | orchestrator | 2025-09-18 01:05:51.883702 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-18 01:05:51.883712 | orchestrator | Thursday 18 September 2025 01:00:38 +0000 (0:00:00.393) 0:03:18.700 **** 2025-09-18 01:05:51.883721 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.883736 | orchestrator | 2025-09-18 01:05:51.883746 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-18 01:05:51.883756 | orchestrator | Thursday 18 September 2025 01:00:39 +0000 (0:00:00.495) 0:03:19.196 **** 2025-09-18 01:05:51.883767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.883806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.883820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.883844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.883855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.883893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.883905 | orchestrator | 2025-09-18 01:05:51.883915 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-18 01:05:51.883925 | orchestrator | Thursday 18 September 2025 01:00:41 +0000 (0:00:02.429) 0:03:21.625 **** 2025-09-18 01:05:51.883936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.883947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.883963 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.883981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.883993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884005 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.884046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884060 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884078 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.884089 | orchestrator | 2025-09-18 01:05:51.884100 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-18 01:05:51.884111 | orchestrator | Thursday 18 September 2025 01:00:42 +0000 (0:00:01.387) 0:03:23.012 **** 2025-09-18 01:05:51.884128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884153 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.884194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884245 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.884263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884288 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.884299 | orchestrator | 2025-09-18 01:05:51.884310 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-18 01:05:51.884322 | orchestrator | Thursday 18 September 2025 01:00:44 +0000 (0:00:01.476) 0:03:24.489 **** 2025-09-18 01:05:51.884363 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.884449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.884461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.884472 | orchestrator | 2025-09-18 01:05:51.884481 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-18 01:05:51.884491 | orchestrator | Thursday 18 September 2025 01:00:46 +0000 (0:00:02.622) 0:03:27.112 **** 2025-09-18 01:05:51.884507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.884592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.884602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.884612 | orchestrator | 2025-09-18 01:05:51.884622 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-18 01:05:51.884632 | orchestrator | Thursday 18 September 2025 01:00:54 +0000 (0:00:07.484) 0:03:34.596 **** 2025-09-18 01:05:51.884650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884699 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.884710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884736 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.884751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 01:05:51.884762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.884773 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.884782 | orchestrator | 2025-09-18 01:05:51.884792 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2025-09-18 01:05:51.884802 | orchestrator | Thursday 18 September 2025 01:00:55 +0000 (0:00:01.016) 0:03:35.613 **** 2025-09-18 01:05:51.884812 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.884822 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.884831 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.884841 | orchestrator | 2025-09-18 01:05:51.884877 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-18 01:05:51.884894 | orchestrator | Thursday 18 September 2025 01:00:57 +0000 (0:00:02.352) 0:03:37.965 **** 2025-09-18 01:05:51.884904 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.884914 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.884923 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.884933 | orchestrator | 2025-09-18 01:05:51.884942 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-18 01:05:51.884952 | orchestrator | Thursday 18 September 2025 01:00:58 +0000 (0:00:00.596) 0:03:38.562 **** 2025-09-18 01:05:51.884963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.884978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.885017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 01:05:51.885036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.885047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.885057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.885067 | orchestrator | 2025-09-18 01:05:51.885076 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-18 01:05:51.885086 | orchestrator | Thursday 18 September 2025 01:01:00 +0000 (0:00:02.222) 0:03:40.784 **** 2025-09-18 01:05:51.885096 
| orchestrator | 2025-09-18 01:05:51.885105 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-18 01:05:51.885115 | orchestrator | Thursday 18 September 2025 01:01:00 +0000 (0:00:00.243) 0:03:41.028 **** 2025-09-18 01:05:51.885125 | orchestrator | 2025-09-18 01:05:51.885134 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-18 01:05:51.885144 | orchestrator | Thursday 18 September 2025 01:01:01 +0000 (0:00:00.132) 0:03:41.160 **** 2025-09-18 01:05:51.885153 | orchestrator | 2025-09-18 01:05:51.885163 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-18 01:05:51.885173 | orchestrator | Thursday 18 September 2025 01:01:01 +0000 (0:00:00.120) 0:03:41.280 **** 2025-09-18 01:05:51.885182 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.885192 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.885201 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.885211 | orchestrator | 2025-09-18 01:05:51.885268 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-18 01:05:51.885285 | orchestrator | Thursday 18 September 2025 01:01:19 +0000 (0:00:18.400) 0:03:59.681 **** 2025-09-18 01:05:51.885295 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.885304 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.885314 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.885323 | orchestrator | 2025-09-18 01:05:51.885333 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-18 01:05:51.885342 | orchestrator | 2025-09-18 01:05:51.885352 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 01:05:51.885361 | orchestrator | Thursday 18 September 2025 01:01:27 +0000 (0:00:07.937) 0:04:07.618 **** 2025-09-18 01:05:51.885378 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.885388 | orchestrator | 2025-09-18 01:05:51.885398 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 01:05:51.885407 | orchestrator | Thursday 18 September 2025 01:01:29 +0000 (0:00:02.121) 0:04:09.739 **** 2025-09-18 01:05:51.885416 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.885426 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.885436 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.885445 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.885455 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.885464 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.885473 | orchestrator | 2025-09-18 01:05:51.885483 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-18 01:05:51.885493 | orchestrator | Thursday 18 September 2025 01:01:30 +0000 (0:00:00.553) 0:04:10.293 **** 2025-09-18 01:05:51.885502 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.885512 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.885521 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.885531 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:05:51.885541 | 
orchestrator | 2025-09-18 01:05:51.885550 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-18 01:05:51.885591 | orchestrator | Thursday 18 September 2025 01:01:32 +0000 (0:00:02.049) 0:04:12.343 **** 2025-09-18 01:05:51.885602 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-18 01:05:51.885612 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-18 01:05:51.885621 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-18 01:05:51.885631 | orchestrator | 2025-09-18 01:05:51.885641 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-18 01:05:51.885650 | orchestrator | Thursday 18 September 2025 01:01:33 +0000 (0:00:01.152) 0:04:13.495 **** 2025-09-18 01:05:51.885660 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-18 01:05:51.885670 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-18 01:05:51.885679 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-18 01:05:51.885689 | orchestrator | 2025-09-18 01:05:51.885699 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-18 01:05:51.885708 | orchestrator | Thursday 18 September 2025 01:01:34 +0000 (0:00:01.581) 0:04:15.077 **** 2025-09-18 01:05:51.885718 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-18 01:05:51.885727 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.885737 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-18 01:05:51.885746 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.885756 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-18 01:05:51.885765 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.885775 | orchestrator | 2025-09-18 01:05:51.885784 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-18 01:05:51.885793 | orchestrator | Thursday 18 September 2025 01:01:35 +0000 (0:00:00.781) 0:04:15.858 **** 2025-09-18 01:05:51.885801 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 01:05:51.885809 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 01:05:51.885817 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.885825 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 01:05:51.885833 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 01:05:51.885841 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.885848 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-18 01:05:51.885862 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-18 01:05:51.885869 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-18 01:05:51.885877 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 01:05:51.885885 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 01:05:51.885893 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.885901 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-18 
01:05:51.885909 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-18 01:05:51.885916 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-18 01:05:51.885924 | orchestrator | 2025-09-18 01:05:51.885932 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-18 01:05:51.885940 | orchestrator | Thursday 18 September 2025 01:01:37 +0000 (0:00:01.403) 0:04:17.262 **** 2025-09-18 01:05:51.885948 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.885955 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.885963 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.885971 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.885979 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.885991 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.885999 | orchestrator | 2025-09-18 01:05:51.886007 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-18 01:05:51.886039 | orchestrator | Thursday 18 September 2025 01:01:39 +0000 (0:00:02.097) 0:04:19.360 **** 2025-09-18 01:05:51.886048 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.886056 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.886064 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.886072 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.886080 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.886088 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.886096 | orchestrator | 2025-09-18 01:05:51.886104 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-18 01:05:51.886112 | orchestrator | Thursday 18 September 2025 01:01:41 +0000 (0:00:02.166) 0:04:21.526 **** 2025-09-18 01:05:51.886121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 
01:05:51.886261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886370 | orchestrator | 2025-09-18 01:05:51.886378 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 01:05:51.886386 | orchestrator | Thursday 18 September 2025 01:01:45 +0000 (0:00:04.014) 0:04:25.541 **** 2025-09-18 01:05:51.886394 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:51.886404 | orchestrator | 2025-09-18 01:05:51.886412 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-18 01:05:51.886420 | orchestrator | Thursday 18 September 2025 01:01:46 +0000 (0:00:01.221) 0:04:26.763 **** 2025-09-18 01:05:51.886428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.886625 | orchestrator | 2025-09-18 01:05:51.886633 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-18 01:05:51.886646 | orchestrator | Thursday 18 September 2025 01:01:50 +0000 (0:00:04.194) 0:04:30.957 **** 2025-09-18 01:05:51.886677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.886686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.886695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.886703 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.886717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.886726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.886762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.886772 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.886780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.886789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.886797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.886805 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.886817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.886826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.886839 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.886871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.886880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.886889 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.886897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.886905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.886913 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.886921 | orchestrator | 2025-09-18 01:05:51.886929 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-18 01:05:51.886937 | orchestrator | Thursday 18 September 2025 01:01:52 +0000 (0:00:02.114) 0:04:33.072 **** 2025-09-18 01:05:51.886949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.886963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.886994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.887004 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.887012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.887020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.887033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.887041 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.887049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.887094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.887104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.887112 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.887120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.887129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.887141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.887157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.887165 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.887173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.887204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.887213 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.887237 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.887246 | orchestrator | 2025-09-18 01:05:51.887254 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 01:05:51.887262 | orchestrator | Thursday 18 September 2025 01:01:55 +0000 (0:00:02.142) 0:04:35.215 **** 2025-09-18 01:05:51.887270 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.887277 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.887285 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.887293 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 01:05:51.887301 | orchestrator | 2025-09-18 01:05:51.887309 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-18 01:05:51.887316 | orchestrator | Thursday 18 September 2025 01:01:55 +0000 (0:00:00.921) 0:04:36.136 **** 2025-09-18 01:05:51.887324 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-18 01:05:51.887332 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-18 01:05:51.887340 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-18 01:05:51.887347 | orchestrator | 2025-09-18 01:05:51.887355 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-18 01:05:51.887363 | orchestrator | Thursday 18 September 2025 01:01:56 +0000 (0:00:00.877) 0:04:37.014 **** 2025-09-18 01:05:51.887370 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-18 01:05:51.887378 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-18 01:05:51.887386 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-18 01:05:51.887393 | orchestrator | 2025-09-18 01:05:51.887401 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-18 01:05:51.887409 | orchestrator | Thursday 18 September 2025 01:01:57 +0000 (0:00:00.784) 0:04:37.798 **** 2025-09-18 01:05:51.887417 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:05:51.887425 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:05:51.887438 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:05:51.887447 | orchestrator | 2025-09-18 01:05:51.887454 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-18 01:05:51.887462 | orchestrator | 
Thursday 18 September 2025 01:01:58 +0000 (0:00:00.461) 0:04:38.260 **** 2025-09-18 01:05:51.887470 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:05:51.887478 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:05:51.887486 | orchestrator | ok: [testbed-node-5] 2025-09-18 01:05:51.887493 | orchestrator | 2025-09-18 01:05:51.887501 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-18 01:05:51.887509 | orchestrator | Thursday 18 September 2025 01:01:58 +0000 (0:00:00.614) 0:04:38.875 **** 2025-09-18 01:05:51.887517 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-18 01:05:51.887525 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-18 01:05:51.887533 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-18 01:05:51.887540 | orchestrator | 2025-09-18 01:05:51.887548 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-18 01:05:51.887556 | orchestrator | Thursday 18 September 2025 01:01:59 +0000 (0:00:01.076) 0:04:39.952 **** 2025-09-18 01:05:51.887564 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-18 01:05:51.887576 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-18 01:05:51.887584 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-18 01:05:51.887592 | orchestrator | 2025-09-18 01:05:51.887599 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-18 01:05:51.887607 | orchestrator | Thursday 18 September 2025 01:02:00 +0000 (0:00:01.157) 0:04:41.109 **** 2025-09-18 01:05:51.887615 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-18 01:05:51.887623 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-18 01:05:51.887631 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-18 01:05:51.887639 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-18 01:05:51.887646 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-18 01:05:51.887654 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-18 01:05:51.887662 | orchestrator | 2025-09-18 01:05:51.887669 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-18 01:05:51.887677 | orchestrator | Thursday 18 September 2025 01:02:05 +0000 (0:00:04.450) 0:04:45.560 **** 2025-09-18 01:05:51.887685 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.887693 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.887701 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.887708 | orchestrator | 2025-09-18 01:05:51.887716 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-18 01:05:51.887724 | orchestrator | Thursday 18 September 2025 01:02:05 +0000 (0:00:00.518) 0:04:46.079 **** 2025-09-18 01:05:51.887732 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.887739 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.887747 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.887755 | orchestrator | 2025-09-18 01:05:51.887762 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-18 01:05:51.887770 | orchestrator | Thursday 18 September 2025 01:02:06 +0000 (0:00:00.344) 0:04:46.423 **** 2025-09-18 01:05:51.887778 | 
orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.887786 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.887794 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.887801 | orchestrator | 2025-09-18 01:05:51.887833 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-18 01:05:51.887842 | orchestrator | Thursday 18 September 2025 01:02:07 +0000 (0:00:01.488) 0:04:47.912 **** 2025-09-18 01:05:51.887850 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-18 01:05:51.887865 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-18 01:05:51.887873 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-18 01:05:51.887881 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-18 01:05:51.887889 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-18 01:05:51.887897 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-18 01:05:51.887905 | orchestrator | 2025-09-18 01:05:51.887913 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-18 01:05:51.887921 | orchestrator | Thursday 18 September 2025 01:02:11 +0000 (0:00:03.605) 0:04:51.518 **** 2025-09-18 01:05:51.887929 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 01:05:51.887936 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 01:05:51.887944 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 01:05:51.887952 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 01:05:51.887960 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.887967 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 01:05:51.887975 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.887983 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 01:05:51.887991 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.887998 | orchestrator | 2025-09-18 01:05:51.888006 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-18 01:05:51.888014 | orchestrator | Thursday 18 September 2025 01:02:15 +0000 (0:00:03.799) 0:04:55.318 **** 2025-09-18 01:05:51.888022 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.888029 | orchestrator | 2025-09-18 01:05:51.888037 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-18 01:05:51.888045 | orchestrator | Thursday 18 September 2025 01:02:15 +0000 (0:00:00.127) 0:04:55.446 **** 2025-09-18 01:05:51.888053 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.888061 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.888068 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.888076 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.888084 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
01:05:51.888091 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.888099 | orchestrator | 2025-09-18 01:05:51.888107 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-18 01:05:51.888115 | orchestrator | Thursday 18 September 2025 01:02:15 +0000 (0:00:00.619) 0:04:56.065 **** 2025-09-18 01:05:51.888123 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-18 01:05:51.888130 | orchestrator | 2025-09-18 01:05:51.888138 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-18 01:05:51.888146 | orchestrator | Thursday 18 September 2025 01:02:16 +0000 (0:00:00.713) 0:04:56.779 **** 2025-09-18 01:05:51.888154 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.888162 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.888173 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.888181 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.888189 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.888197 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.888204 | orchestrator | 2025-09-18 01:05:51.888212 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-18 01:05:51.888220 | orchestrator | Thursday 18 September 2025 01:02:17 +0000 (0:00:00.877) 0:04:57.657 **** 2025-09-18 01:05:51.888266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888433 | orchestrator | 2025-09-18 01:05:51.888441 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-18 01:05:51.888449 | orchestrator | Thursday 18 September 2025 01:02:21 +0000 (0:00:03.826) 0:05:01.483 **** 2025-09-18 01:05:51.888458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.888466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.888483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.888491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.888504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.888513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.888521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.888620 | orchestrator | 2025-09-18 01:05:51.888628 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-18 01:05:51.888636 | orchestrator | Thursday 18 September 2025 01:02:27 +0000 (0:00:06.636) 0:05:08.120 **** 2025-09-18 01:05:51.888643 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.888650 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.888657 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.888664 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.888670 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.888677 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.888683 | orchestrator | 2025-09-18 01:05:51.888690 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-18 01:05:51.888697 | orchestrator | Thursday 18 September 2025 01:02:29 +0000 (0:00:01.605) 0:05:09.726 **** 2025-09-18 01:05:51.888704 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-18 01:05:51.888710 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-18 01:05:51.888717 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-18 01:05:51.888723 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-18 01:05:51.888733 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-18 01:05:51.888740 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.888747 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-18 01:05:51.888754 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-18 01:05:51.888760 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.888767 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-18 01:05:51.888774 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-18 01:05:51.888780 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.888787 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-18 01:05:51.888794 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-18 01:05:51.888800 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-18 01:05:51.888807 | orchestrator | 2025-09-18 01:05:51.888813 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-18 01:05:51.888820 | orchestrator | Thursday 18 September 2025 01:02:33 +0000 (0:00:04.112) 0:05:13.839 **** 2025-09-18 01:05:51.888827 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.888833 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.888845 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.888851 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.888858 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.888865 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.888871 | orchestrator | 2025-09-18 01:05:51.888878 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-18 01:05:51.888885 | orchestrator | Thursday 18 September 2025 01:02:34 +0000 (0:00:00.742) 0:05:14.581 **** 2025-09-18 01:05:51.888892 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-18 01:05:51.888898 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-18 01:05:51.888905 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-18 01:05:51.888912 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-18 01:05:51.888919 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-18 01:05:51.888925 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-18 01:05:51.888932 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-18 01:05:51.888939 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-18 01:05:51.888945 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-libvirt'})  2025-09-18 01:05:51.888955 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-18 01:05:51.888962 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.888969 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-18 01:05:51.888975 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.888982 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-18 01:05:51.888989 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-18 01:05:51.888995 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-18 01:05:51.889002 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889009 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-18 01:05:51.889015 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-18 01:05:51.889022 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-18 01:05:51.889029 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-18 01:05:51.889035 | orchestrator | 2025-09-18 01:05:51.889042 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-18 01:05:51.889049 | orchestrator | Thursday 18 September 2025 01:02:41 +0000 (0:00:06.796) 0:05:21.381 **** 2025-09-18 01:05:51.889055 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 01:05:51.889062 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 01:05:51.889072 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 01:05:51.889078 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 01:05:51.889089 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 01:05:51.889096 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 01:05:51.889103 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-18 01:05:51.889109 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-18 01:05:51.889116 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-18 01:05:51.889122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 01:05:51.889129 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 01:05:51.889136 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 01:05:51.889142 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 01:05:51.889149 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 01:05:51.889156 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 01:05:51.889162 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-18 01:05:51.889169 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889176 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-18 01:05:51.889182 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889189 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-18 01:05:51.889196 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.889202 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 01:05:51.889209 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 01:05:51.889216 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 01:05:51.889233 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 01:05:51.889241 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 01:05:51.889247 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 01:05:51.889254 | orchestrator | 2025-09-18 01:05:51.889260 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-18 01:05:51.889267 | orchestrator | Thursday 18 September 2025 01:02:50 +0000 (0:00:08.980) 0:05:30.362 **** 2025-09-18 01:05:51.889274 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.889280 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.889287 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.889294 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889300 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.889307 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889313 | orchestrator | 2025-09-18 01:05:51.889320 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-18 01:05:51.889330 | orchestrator | Thursday 18 September 2025 01:02:50 +0000 (0:00:00.651) 0:05:31.013 **** 2025-09-18 01:05:51.889337 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.889343 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.889350 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.889357 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889363 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.889370 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889376 | orchestrator | 2025-09-18 01:05:51.889383 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-18 01:05:51.889395 | orchestrator | Thursday 18 September 2025 01:02:51 +0000 (0:00:00.536) 0:05:31.550 **** 2025-09-18 01:05:51.889401 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889408 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.889415 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889421 | orchestrator | changed: [testbed-node-3] 
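(Editorial aside for readers of this log: the dictionaries echoed by the tasks above and below — container_name, image, volumes, dimensions, healthcheck — are the per-service definitions that kolla-ansible renders and then hands to its container module when the nova containers are created. As a rough, non-authoritative illustration of what one of these definitions implies at the container-runtime level, the sketch below translates the nova-ssh item from this log into docker-style CLI arguments. The to_docker_args helper and the simplified flag mapping are assumptions made purely for illustration; the actual deployment goes through kolla-ansible's own container module rather than a hand-built docker command.)

```python
# Illustration only (not part of the deployment): translate one of the
# kolla-ansible service definitions echoed in this log into docker-style
# CLI arguments, to make the logged dictionaries easier to read.
# The real run uses kolla-ansible's container module, not a raw docker call.

NOVA_SSH = {  # copied from the 'nova-ssh' item shown in the tasks above
    "container_name": "nova_ssh",
    "image": "registry.osism.tech/kolla/nova-ssh:2024.2",
    "volumes": [
        "/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla",
        "nova_compute:/var/lib/nova",
        "",  # empty placeholders like this appear throughout the log
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"],
        "timeout": "30",
    },
}


def to_docker_args(svc: dict) -> list[str]:
    """Build a docker-run style argument list implied by a service definition."""
    args = ["docker", "run", "-d", "--name", svc["container_name"]]
    for volume in svc.get("volumes", []):
        if volume:  # skip the empty placeholder entries
            args += ["-v", volume]
    hc = svc.get("healthcheck") or {}
    if hc:
        args += [
            "--health-cmd", hc["test"][-1],
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", str(hc["retries"]),
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]
    args.append(svc["image"])
    return args


if __name__ == "__main__":
    print(" ".join(to_docker_args(NOVA_SSH)))
```

The empty strings inside the logged volumes lists appear to be placeholders left by optional mounts that are not enabled in this testbed; filtering them out, as the sketch does, mirrors how they end up having no effect. End of aside; the task output continues below with the remaining compute nodes.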
2025-09-18 01:05:51.889428 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.889434 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.889441 | orchestrator | 2025-09-18 01:05:51.889448 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-18 01:05:51.889454 | orchestrator | Thursday 18 September 2025 01:02:53 +0000 (0:00:01.896) 0:05:33.446 **** 2025-09-18 01:05:51.889465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.889472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.889479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.889487 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.889494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.889504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.889516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.889523 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.889534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 01:05:51.889541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 01:05:51.889548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.889555 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.889566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.889578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.889585 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.889602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.889610 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.889617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 01:05:51.889624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 01:05:51.889631 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889638 | orchestrator | 2025-09-18 01:05:51.889644 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-18 01:05:51.889651 | orchestrator | Thursday 18 September 2025 01:02:54 +0000 (0:00:01.413) 0:05:34.860 **** 2025-09-18 01:05:51.889658 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-18 01:05:51.889665 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-18 01:05:51.889676 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.889682 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-18 01:05:51.889689 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-18 01:05:51.889696 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.889702 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-18 01:05:51.889709 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-18 01:05:51.889715 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.889722 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-18 01:05:51.889729 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-18 01:05:51.889735 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-18 01:05:51.889742 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-18 01:05:51.889749 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889759 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.889765 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-18 01:05:51.889772 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-18 01:05:51.889779 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.889785 | orchestrator | 2025-09-18 01:05:51.889792 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-18 01:05:51.889799 | orchestrator | Thursday 18 September 2025 01:02:55 +0000 (0:00:00.905) 0:05:35.766 **** 2025-09-18 01:05:51.889806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 01:05:51.889949 | orchestrator | 2025-09-18 01:05:51.889956 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 01:05:51.889962 | orchestrator | Thursday 18 September 2025 01:02:58 +0000 (0:00:03.279) 
0:05:39.045 **** 2025-09-18 01:05:51.889969 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.889976 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.889983 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.889989 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.889996 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.890002 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.890009 | orchestrator | 2025-09-18 01:05:51.890037 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-18 01:05:51.890044 | orchestrator | Thursday 18 September 2025 01:02:59 +0000 (0:00:00.548) 0:05:39.594 **** 2025-09-18 01:05:51.890051 | orchestrator | 2025-09-18 01:05:51.890058 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-18 01:05:51.890064 | orchestrator | Thursday 18 September 2025 01:02:59 +0000 (0:00:00.098) 0:05:39.693 **** 2025-09-18 01:05:51.890071 | orchestrator | 2025-09-18 01:05:51.890078 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-18 01:05:51.890085 | orchestrator | Thursday 18 September 2025 01:02:59 +0000 (0:00:00.100) 0:05:39.794 **** 2025-09-18 01:05:51.890091 | orchestrator | 2025-09-18 01:05:51.890098 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-18 01:05:51.890105 | orchestrator | Thursday 18 September 2025 01:02:59 +0000 (0:00:00.098) 0:05:39.893 **** 2025-09-18 01:05:51.890111 | orchestrator | 2025-09-18 01:05:51.890118 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-18 01:05:51.890124 | orchestrator | Thursday 18 September 2025 01:02:59 +0000 (0:00:00.102) 0:05:39.995 **** 2025-09-18 01:05:51.890131 | orchestrator | 2025-09-18 01:05:51.890138 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-18 01:05:51.890144 | orchestrator | Thursday 18 September 2025 01:02:59 +0000 (0:00:00.095) 0:05:40.091 **** 2025-09-18 01:05:51.890151 | orchestrator | 2025-09-18 01:05:51.890158 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-18 01:05:51.890164 | orchestrator | Thursday 18 September 2025 01:03:00 +0000 (0:00:00.186) 0:05:40.278 **** 2025-09-18 01:05:51.890171 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.890178 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.890184 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.890191 | orchestrator | 2025-09-18 01:05:51.890202 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-18 01:05:51.890208 | orchestrator | Thursday 18 September 2025 01:03:11 +0000 (0:00:11.836) 0:05:52.114 **** 2025-09-18 01:05:51.890215 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.890235 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.890242 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.890249 | orchestrator | 2025-09-18 01:05:51.890256 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-18 01:05:51.890263 | orchestrator | Thursday 18 September 2025 01:03:23 +0000 (0:00:11.824) 0:06:03.938 **** 2025-09-18 01:05:51.890269 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.890276 | orchestrator | 
changed: [testbed-node-4] 2025-09-18 01:05:51.890283 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.890289 | orchestrator | 2025-09-18 01:05:51.890296 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-18 01:05:51.890303 | orchestrator | Thursday 18 September 2025 01:03:42 +0000 (0:00:18.597) 0:06:22.535 **** 2025-09-18 01:05:51.890309 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.890316 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.890323 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.890329 | orchestrator | 2025-09-18 01:05:51.890336 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-18 01:05:51.890348 | orchestrator | Thursday 18 September 2025 01:04:16 +0000 (0:00:33.687) 0:06:56.223 **** 2025-09-18 01:05:51.890355 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.890361 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.890368 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.890375 | orchestrator | 2025-09-18 01:05:51.890382 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-18 01:05:51.890388 | orchestrator | Thursday 18 September 2025 01:04:16 +0000 (0:00:00.765) 0:06:56.989 **** 2025-09-18 01:05:51.890395 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.890402 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.890408 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.890415 | orchestrator | 2025-09-18 01:05:51.890422 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-18 01:05:51.890432 | orchestrator | Thursday 18 September 2025 01:04:17 +0000 (0:00:00.741) 0:06:57.730 **** 2025-09-18 01:05:51.890439 | orchestrator | changed: [testbed-node-4] 2025-09-18 01:05:51.890446 | orchestrator | changed: [testbed-node-5] 2025-09-18 01:05:51.890452 | orchestrator | changed: [testbed-node-3] 2025-09-18 01:05:51.890459 | orchestrator | 2025-09-18 01:05:51.890466 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-18 01:05:51.890472 | orchestrator | Thursday 18 September 2025 01:04:42 +0000 (0:00:24.970) 0:07:22.701 **** 2025-09-18 01:05:51.890479 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.890486 | orchestrator | 2025-09-18 01:05:51.890492 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-18 01:05:51.890499 | orchestrator | Thursday 18 September 2025 01:04:42 +0000 (0:00:00.146) 0:07:22.847 **** 2025-09-18 01:05:51.890505 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.890512 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.890519 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.890525 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.890532 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.890539 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
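The "FAILED - RETRYING … (20 retries left)" line above is normal Ansible retry output rather than an error: the task polls until its condition is met or the retries run out, and the following "ok:" entry shows it eventually succeeded. A minimal sketch of such a wait loop follows; the task name and delegation target match the log, but the command, the registered variable, the expected-host calculation and the retry/delay values are illustrative assumptions, not the role's actual implementation.

- name: Waiting for nova-compute services to register themselves
  # Poll until every expected compute host has registered its nova-compute
  # service in the cell database. The CLI call here is only one possible way
  # to perform the check; the real command is not shown in the log.
  ansible.builtin.command: >
    openstack compute service list --service nova-compute -f value -c Host
  register: compute_services                                 # assumed variable name
  until: compute_services.stdout_lines | length >= (groups['compute'] | length)  # 'compute' group name assumed
  retries: 20                                                # matches "20 retries left" above
  delay: 10                                                  # assumed polling interval
  delegate_to: testbed-node-0                                # delegation target as seen in the log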
2025-09-18 01:05:51.890546 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 01:05:51.890552 | orchestrator | 2025-09-18 01:05:51.890559 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-18 01:05:51.890566 | orchestrator | Thursday 18 September 2025 01:05:04 +0000 (0:00:21.846) 0:07:44.693 **** 2025-09-18 01:05:51.890572 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.890579 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.890585 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.890592 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.890599 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.890605 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.890612 | orchestrator | 2025-09-18 01:05:51.890618 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-18 01:05:51.890625 | orchestrator | Thursday 18 September 2025 01:05:13 +0000 (0:00:08.681) 0:07:53.374 **** 2025-09-18 01:05:51.890632 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.890638 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.890645 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.890651 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.890658 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.890665 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-09-18 01:05:51.890671 | orchestrator | 2025-09-18 01:05:51.890678 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-18 01:05:51.890685 | orchestrator | Thursday 18 September 2025 01:05:17 +0000 (0:00:04.151) 0:07:57.526 **** 2025-09-18 01:05:51.890691 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 01:05:51.890702 | orchestrator | 2025-09-18 01:05:51.890709 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-18 01:05:51.890716 | orchestrator | Thursday 18 September 2025 01:05:30 +0000 (0:00:13.216) 0:08:10.742 **** 2025-09-18 01:05:51.890723 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 01:05:51.890729 | orchestrator | 2025-09-18 01:05:51.890736 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-18 01:05:51.890746 | orchestrator | Thursday 18 September 2025 01:05:31 +0000 (0:00:01.260) 0:08:12.002 **** 2025-09-18 01:05:51.890757 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.890768 | orchestrator | 2025-09-18 01:05:51.890778 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-18 01:05:51.890795 | orchestrator | Thursday 18 September 2025 01:05:33 +0000 (0:00:01.303) 0:08:13.306 **** 2025-09-18 01:05:51.890812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 01:05:51.890824 | orchestrator | 2025-09-18 01:05:51.890836 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-18 01:05:51.890847 | orchestrator | Thursday 18 September 2025 01:05:44 +0000 (0:00:11.579) 0:08:24.885 **** 2025-09-18 01:05:51.890858 | orchestrator | ok: [testbed-node-3] 2025-09-18 01:05:51.890869 | orchestrator | ok: [testbed-node-4] 2025-09-18 01:05:51.890880 | orchestrator | ok: 
[testbed-node-5] 2025-09-18 01:05:51.890891 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:51.890903 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:05:51.890914 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:05:51.890925 | orchestrator | 2025-09-18 01:05:51.890932 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-18 01:05:51.890939 | orchestrator | 2025-09-18 01:05:51.890946 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-18 01:05:51.890952 | orchestrator | Thursday 18 September 2025 01:05:46 +0000 (0:00:01.925) 0:08:26.810 **** 2025-09-18 01:05:51.890959 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:51.890966 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:51.890973 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:51.890979 | orchestrator | 2025-09-18 01:05:51.890986 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-18 01:05:51.890992 | orchestrator | 2025-09-18 01:05:51.890999 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-18 01:05:51.891006 | orchestrator | Thursday 18 September 2025 01:05:47 +0000 (0:00:01.116) 0:08:27.926 **** 2025-09-18 01:05:51.891013 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.891019 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.891026 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.891032 | orchestrator | 2025-09-18 01:05:51.891039 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-18 01:05:51.891046 | orchestrator | 2025-09-18 01:05:51.891053 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-18 01:05:51.891059 | orchestrator | Thursday 18 September 2025 01:05:48 +0000 (0:00:00.550) 0:08:28.477 **** 2025-09-18 01:05:51.891066 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-18 01:05:51.891078 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-18 01:05:51.891085 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-18 01:05:51.891092 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-18 01:05:51.891098 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-18 01:05:51.891105 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-18 01:05:51.891112 | orchestrator | skipping: [testbed-node-3] 2025-09-18 01:05:51.891118 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-18 01:05:51.891125 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-18 01:05:51.891131 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-18 01:05:51.891144 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-18 01:05:51.891151 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-18 01:05:51.891158 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-18 01:05:51.891164 | orchestrator | skipping: [testbed-node-4] 2025-09-18 01:05:51.891171 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-18 01:05:51.891177 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-18 01:05:51.891184 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-18 01:05:51.891191 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-18 01:05:51.891197 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-18 01:05:51.891204 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-18 01:05:51.891210 | orchestrator | skipping: [testbed-node-5] 2025-09-18 01:05:51.891217 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-18 01:05:51.891260 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-18 01:05:51.891267 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-18 01:05:51.891274 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-18 01:05:51.891280 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-18 01:05:51.891287 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-18 01:05:51.891293 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.891300 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-18 01:05:51.891307 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-18 01:05:51.891313 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-18 01:05:51.891320 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-18 01:05:51.891327 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-18 01:05:51.891333 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-18 01:05:51.891340 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.891346 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-18 01:05:51.891353 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-18 01:05:51.891359 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-18 01:05:51.891366 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-18 01:05:51.891373 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-18 01:05:51.891379 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-18 01:05:51.891386 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:51.891392 | orchestrator | 2025-09-18 01:05:51.891399 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-18 01:05:51.891405 | orchestrator | 2025-09-18 01:05:51.891416 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-18 01:05:51.891423 | orchestrator | Thursday 18 September 2025 01:05:49 +0000 (0:00:01.253) 0:08:29.731 **** 2025-09-18 01:05:51.891429 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-18 01:05:51.891436 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-18 01:05:51.891443 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:51.891450 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-18 01:05:51.891456 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-18 01:05:51.891463 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:51.891469 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-18 01:05:51.891476 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  
2025-09-18 01:05:51.891483 | orchestrator | skipping: [testbed-node-2]
2025-09-18 01:05:51.891489 | orchestrator |
2025-09-18 01:05:51.891501 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-18 01:05:51.891508 | orchestrator |
2025-09-18 01:05:51.891515 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-18 01:05:51.891521 | orchestrator | Thursday 18 September 2025 01:05:50 +0000 (0:00:00.719) 0:08:30.450 ****
2025-09-18 01:05:51.891528 | orchestrator | skipping: [testbed-node-0]
2025-09-18 01:05:51.891534 | orchestrator |
2025-09-18 01:05:51.891541 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-18 01:05:51.891548 | orchestrator |
2025-09-18 01:05:51.891554 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-18 01:05:51.891561 | orchestrator | Thursday 18 September 2025 01:05:50 +0000 (0:00:00.682) 0:08:31.133 ****
2025-09-18 01:05:51.891568 | orchestrator | skipping: [testbed-node-0]
2025-09-18 01:05:51.891574 | orchestrator | skipping: [testbed-node-1]
2025-09-18 01:05:51.891581 | orchestrator | skipping: [testbed-node-2]
2025-09-18 01:05:51.891587 | orchestrator |
2025-09-18 01:05:51.891594 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 01:05:51.891601 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 01:05:51.891612 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-18 01:05:51.891619 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-18 01:05:51.891626 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-18 01:05:51.891633 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-18 01:05:51.891640 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-18 01:05:51.891646 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-18 01:05:51.891653 | orchestrator |
2025-09-18 01:05:51.891659 | orchestrator |
2025-09-18 01:05:51.891666 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 01:05:51.891673 | orchestrator | Thursday 18 September 2025 01:05:51 +0000 (0:00:00.423) 0:08:31.556 ****
2025-09-18 01:05:51.891679 | orchestrator | ===============================================================================
2025-09-18 01:05:51.891686 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 33.69s
2025-09-18 01:05:51.891693 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.08s
2025-09-18 01:05:51.891699 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.97s
2025-09-18 01:05:51.891706 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.85s
2025-09-18 01:05:51.891712 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.18s
2025-09-18 01:05:51.891719 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.36s
2025-09-18 01:05:51.891725 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.60s
2025-09-18 01:05:51.891732 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.40s
2025-09-18 01:05:51.891739 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.76s
2025-09-18 01:05:51.891745 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.47s
2025-09-18 01:05:51.891752 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.22s
2025-09-18 01:05:51.891758 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.44s
2025-09-18 01:05:51.891769 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.84s
2025-09-18 01:05:51.891776 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.82s
2025-09-18 01:05:51.891783 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.58s
2025-09-18 01:05:51.891789 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.37s
2025-09-18 01:05:51.891795 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.98s
2025-09-18 01:05:51.891804 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.77s
2025-09-18 01:05:51.891811 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.68s
2025-09-18 01:05:51.891817 | orchestrator | nova : Restart nova-api container --------------------------------------- 7.94s
2025-09-18 01:05:51.891823 | orchestrator | 2025-09-18 01:05:51 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED
2025-09-18 01:05:51.891829 | orchestrator | 2025-09-18 01:05:51 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED
2025-09-18 01:05:51.891836 | orchestrator | 2025-09-18 01:05:51 | INFO  | Wait 1 second(s) until the next check
2025-09-18 01:05:54.924955 | orchestrator | 2025-09-18 01:05:54 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state STARTED
2025-09-18 01:05:54.926388 | orchestrator | 2025-09-18 01:05:54 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED
2025-09-18 01:05:54.926726 | orchestrator | 2025-09-18 01:05:54 | INFO  | Wait 1 second(s) until the next check
2025-09-18 01:05:57.971818 | orchestrator |
2025-09-18 01:05:57.971916 | orchestrator |
2025-09-18 01:05:57.971930 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 01:05:57.971943 | orchestrator |
2025-09-18 01:05:57.971954 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 01:05:57.971966 | orchestrator | Thursday 18 September 2025 01:03:45 +0000 (0:00:00.589) 0:00:00.589 ****
2025-09-18 01:05:57.971977 | orchestrator | ok: [testbed-node-0]
2025-09-18 01:05:57.971989 | orchestrator | ok: [testbed-node-1]
2025-09-18 01:05:57.972000 | orchestrator | ok: [testbed-node-2]
2025-09-18 01:05:57.972010 | orchestrator |
2025-09-18 01:05:57.972021 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 01:05:57.972032 | orchestrator | Thursday 18 September 2025 01:03:45 +0000 (0:00:00.386) 0:00:00.976 
**** 2025-09-18 01:05:57.972043 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-18 01:05:57.972054 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-18 01:05:57.972065 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-18 01:05:57.972076 | orchestrator | 2025-09-18 01:05:57.972087 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-18 01:05:57.972098 | orchestrator | 2025-09-18 01:05:57.972844 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-18 01:05:57.972873 | orchestrator | Thursday 18 September 2025 01:03:45 +0000 (0:00:00.563) 0:00:01.539 **** 2025-09-18 01:05:57.972886 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:57.972899 | orchestrator | 2025-09-18 01:05:57.972912 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-18 01:05:57.972925 | orchestrator | Thursday 18 September 2025 01:03:46 +0000 (0:00:00.841) 0:00:02.380 **** 2025-09-18 01:05:57.972940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.972984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.972998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973010 | orchestrator | 2025-09-18 01:05:57.973037 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-18 01:05:57.973049 | orchestrator | Thursday 18 September 2025 01:03:47 +0000 (0:00:01.019) 
0:00:03.400 **** 2025-09-18 01:05:57.973061 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-18 01:05:57.973072 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-18 01:05:57.973083 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 01:05:57.973094 | orchestrator | 2025-09-18 01:05:57.973105 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-18 01:05:57.973116 | orchestrator | Thursday 18 September 2025 01:03:48 +0000 (0:00:00.595) 0:00:03.996 **** 2025-09-18 01:05:57.973127 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:05:57.973138 | orchestrator | 2025-09-18 01:05:57.973149 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-18 01:05:57.973160 | orchestrator | Thursday 18 September 2025 01:03:49 +0000 (0:00:00.621) 0:00:04.617 **** 2025-09-18 01:05:57.973250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973303 | orchestrator | 2025-09-18 01:05:57.973315 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-18 01:05:57.973326 | orchestrator | Thursday 18 September 2025 01:03:50 +0000 (0:00:01.610) 0:00:06.228 **** 2025-09-18 01:05:57.973337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 01:05:57.973355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 01:05:57.973368 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.973380 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.973427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 01:05:57.973441 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.973453 | orchestrator | 2025-09-18 01:05:57.973465 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-18 01:05:57.973476 | orchestrator | Thursday 18 September 2025 01:03:51 +0000 (0:00:00.371) 0:00:06.599 **** 2025-09-18 01:05:57.973488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 01:05:57.973510 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.973525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 01:05:57.973539 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.973553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 01:05:57.973566 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.973579 | orchestrator | 2025-09-18 01:05:57.973592 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-18 01:05:57.973605 | orchestrator | Thursday 18 September 2025 01:03:52 +0000 (0:00:01.079) 0:00:07.678 **** 2025-09-18 01:05:57.973618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973710 | orchestrator | 2025-09-18 01:05:57.973725 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-18 01:05:57.973738 | orchestrator | Thursday 18 September 2025 01:03:53 +0000 (0:00:01.324) 0:00:09.002 **** 2025-09-18 01:05:57.973751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.973793 | orchestrator | 2025-09-18 01:05:57.973806 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-18 01:05:57.973821 | orchestrator | Thursday 18 September 2025 01:03:54 +0000 (0:00:01.292) 0:00:10.295 **** 2025-09-18 01:05:57.973834 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.973847 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.973858 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.973868 | orchestrator | 2025-09-18 01:05:57.973880 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-18 01:05:57.973891 | orchestrator | Thursday 18 September 2025 01:03:55 +0000 (0:00:00.497) 0:00:10.792 **** 2025-09-18 01:05:57.973901 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-18 01:05:57.973918 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-18 01:05:57.973929 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-18 01:05:57.973940 | orchestrator | 2025-09-18 01:05:57.973951 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-18 01:05:57.973961 | orchestrator | Thursday 18 September 2025 01:03:56 +0000 (0:00:01.115) 0:00:11.908 **** 2025-09-18 01:05:57.973973 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-18 01:05:57.973984 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-18 01:05:57.974002 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-18 01:05:57.974059 | orchestrator | 2025-09-18 01:05:57.974074 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-18 01:05:57.974086 | orchestrator | Thursday 18 September 2025 01:03:57 +0000 (0:00:01.301) 0:00:13.209 **** 2025-09-18 01:05:57.974133 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 01:05:57.974146 | orchestrator | 2025-09-18 01:05:57.974157 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-18 01:05:57.974168 | orchestrator | Thursday 18 September 2025 01:03:58 +0000 (0:00:00.754) 0:00:13.963 **** 2025-09-18 01:05:57.974179 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-18 01:05:57.974190 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-18 01:05:57.974201 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:57.974212 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:05:57.974246 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:05:57.974259 | orchestrator | 2025-09-18 01:05:57.974270 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-18 01:05:57.974281 | orchestrator | Thursday 18 September 2025 01:03:59 +0000 (0:00:00.717) 0:00:14.681 **** 2025-09-18 01:05:57.974292 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.974303 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.974314 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.974325 | orchestrator | 2025-09-18 01:05:57.974336 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-18 01:05:57.974347 | orchestrator | Thursday 18 September 2025 01:03:59 +0000 (0:00:00.509) 0:00:15.190 **** 2025-09-18 01:05:57.974359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090185, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7769706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974373 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090185, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7769706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090185, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7769706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090229, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7974446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090229, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7974446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090229, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7974446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090196, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7799706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090196, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7799706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090196, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7799706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090234, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8015344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090234, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8015344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090234, 'dev': 139, 
'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8015344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090208, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7862017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090208, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7862017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090208, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7862017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090220, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7949646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090220, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7949646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090220, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7949646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090184, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.75201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090184, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.75201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090184, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.75201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090194, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7779706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974789 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090194, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7779706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090194, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7779706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': 2025-09-18 01:05:57 | INFO  | Task 458f6546-39cf-41dd-860f-15a1b54dd37b is in state SUCCESS 2025-09-18 01:05:57.974859 | orchestrator | {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090197, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7809706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090197, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7809706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090197, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7809706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090215, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7885432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090215, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7885432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090215, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7885432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090227, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7957175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.974991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090227, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7957175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 12997, 'inode': 1090227, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7957175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090195, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7789705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090195, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7789705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090195, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7789705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090218, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7937093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090218, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7937093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090218, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7937093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090209, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7881505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090209, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7881505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090209, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7881505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090204, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7855828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
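The "Configuring dashboards provisioning" task earlier in this play copies a provisioning.yaml that tells Grafana to load from disk the JSON dashboards being copied in the loop above and below. The file's contents are not echoed in this log; a minimal file-based dashboard provider of the kind Grafana's provisioning mechanism expects (provider name, folder, and path below are illustrative assumptions, not the testbed's actual values) would look roughly like:

    # illustrative sketch of a Grafana dashboard provider definition (not taken from this log)
    apiVersion: 1
    providers:
      - name: default          # assumed provider name
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        updateIntervalSeconds: 60
        options:
          path: /var/lib/grafana/dashboards   # assumed in-container dashboards path

With a file provider like this, Grafana rescans options.path at the configured interval and loads any dashboard JSON files it finds there.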
2025-09-18 01:05:57.975211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090204, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7855828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090204, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7855828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090200, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.78465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090200, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.78465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090200, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.78465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090217, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7919707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090217, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7919707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090217, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7919707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090198, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7819705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090198, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7819705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090198, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7819705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090225, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7957175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090225, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7957175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090225, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.7957175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090453, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8889723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090453, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 
1758154477.8889723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090453, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8889723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090265, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8459716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090265, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8459716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090265, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8459716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090248, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8055012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090248, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8055012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090248, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8055012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090334, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8510802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090334, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8510802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090334, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8510802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975641 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090243, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8030674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090243, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8030674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090371, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8769722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090243, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8030674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090371, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8769722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975713 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090338, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8595827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090371, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8769722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090338, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8595827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090419, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.881038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090338, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8595827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975778 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090419, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.881038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090445, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8876672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090419, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.881038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090445, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8876672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090368, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8612852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090445, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8876672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090368, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8612852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090323, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8479717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090368, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8612852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090323, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8479717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090254, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8203228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090323, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8479717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090254, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8203228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090321, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8459716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090254, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8203228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.975989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090321, 'dev': 139, 'nlink': 1, 'atime': 
1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8459716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090250, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.80712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090321, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8459716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090250, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.80712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090329, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8505082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090250, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.80712, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090329, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8505082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090433, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8867688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090329, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8505082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090433, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8867688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090424, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8844032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090433, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8867688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090424, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8844032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090244, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8042815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090424, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8844032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090244, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8042815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090246, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8047473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090246, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8047473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090244, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8042815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090365, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8603117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090365, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8603117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-18 01:05:57.976286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090246, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8047473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090422, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.881038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090422, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.881038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090365, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.8603117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090422, 'dev': 139, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758154477.881038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 01:05:57.976356 | orchestrator | 2025-09-18 01:05:57.976367 
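The long "Copying over custom dashboards" block above is the grafana role looping over every dashboard JSON file it discovered; each `item` is a stat-style result keyed by the dashboard's relative path (e.g. `infrastructure/rabbitmq.json`) and the copy runs on all three controller nodes. A minimal sketch of that copy pattern, assuming illustrative `src_root`/`dest_root` paths rather than the role's actual variables:

```python
import shutil
from pathlib import Path

# Illustrative paths only; the real role stages files from the configuration
# repository into the kolla config directory on each node.
src_root = Path("/operations/grafana/dashboards")
dest_root = Path("/etc/kolla/grafana/dashboards")

def copy_dashboards(src_root: Path, dest_root: Path) -> list[str]:
    """Copy every *.json dashboard, preserving the relative layout."""
    copied = []
    for src in sorted(src_root.rglob("*.json")):
        rel = src.relative_to(src_root)      # e.g. 'infrastructure/database.json'
        dest = dest_root / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)              # preserves mtime; mode comes from the source file
        copied.append(str(rel))
    return copied
```

In the run above the same loop is reported once per node, which is why every dashboard appears three times in the output.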
| orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-18 01:05:57.976378 | orchestrator | Thursday 18 September 2025 01:04:36 +0000 (0:00:37.277) 0:00:52.468 **** 2025-09-18 01:05:57.976389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.976400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.976417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 01:05:57.976428 | orchestrator | 2025-09-18 01:05:57.976443 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-18 01:05:57.976453 | orchestrator | Thursday 18 September 2025 01:04:37 +0000 (0:00:00.922) 0:00:53.391 **** 2025-09-18 01:05:57.976463 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:57.976473 | orchestrator | 2025-09-18 01:05:57.976482 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-18 01:05:57.976492 | orchestrator | Thursday 18 September 2025 01:04:40 +0000 (0:00:02.230) 0:00:55.621 **** 2025-09-18 01:05:57.976502 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:57.976512 | orchestrator | 2025-09-18 01:05:57.976521 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-18 01:05:57.976536 | orchestrator | Thursday 18 September 2025 01:04:42 +0000 (0:00:02.159) 0:00:57.781 **** 2025-09-18 01:05:57.976547 | orchestrator | 2025-09-18 01:05:57.976557 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-18 01:05:57.976567 | orchestrator | Thursday 18 September 
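The "Creating grafana database" and "Creating grafana database user and setting permissions" tasks run only on the first node. A rough equivalent of what they do, sketched with PyMySQL against a placeholder database host and passwords (the play itself uses kolla's own database modules, not this code):

```python
import pymysql  # assumes PyMySQL is available; illustrative only

# Placeholder connection details and secrets.
DB_HOST, ROOT_PW, GRAFANA_PW = "api-int.testbed.osism.xyz", "REPLACE_ME", "REPLACE_ME"

conn = pymysql.connect(host=DB_HOST, user="root", password=ROOT_PW)
try:
    with conn.cursor() as cur:
        # Create the service database and a dedicated user with full rights on it.
        cur.execute("CREATE DATABASE IF NOT EXISTS grafana")
        cur.execute(f"CREATE USER IF NOT EXISTS 'grafana'@'%' IDENTIFIED BY '{GRAFANA_PW}'")
        cur.execute("GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'%'")
    conn.commit()
finally:
    conn.close()
```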
2025 01:04:42 +0000 (0:00:00.060) 0:00:57.841 **** 2025-09-18 01:05:57.976576 | orchestrator | 2025-09-18 01:05:57.976586 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-18 01:05:57.976596 | orchestrator | Thursday 18 September 2025 01:04:42 +0000 (0:00:00.063) 0:00:57.905 **** 2025-09-18 01:05:57.976605 | orchestrator | 2025-09-18 01:05:57.976615 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-18 01:05:57.976624 | orchestrator | Thursday 18 September 2025 01:04:42 +0000 (0:00:00.208) 0:00:58.114 **** 2025-09-18 01:05:57.976634 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.976643 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.976653 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:05:57.976662 | orchestrator | 2025-09-18 01:05:57.976672 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-18 01:05:57.976682 | orchestrator | Thursday 18 September 2025 01:04:44 +0000 (0:00:02.140) 0:01:00.254 **** 2025-09-18 01:05:57.976691 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.976701 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.976710 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-18 01:05:57.976721 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-18 01:05:57.976730 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-18 01:05:57.976740 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:57.976749 | orchestrator | 2025-09-18 01:05:57.976759 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-18 01:05:57.976768 | orchestrator | Thursday 18 September 2025 01:05:24 +0000 (0:00:39.936) 0:01:40.191 **** 2025-09-18 01:05:57.976778 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.976788 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:05:57.976797 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:05:57.976807 | orchestrator | 2025-09-18 01:05:57.976816 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-18 01:05:57.976826 | orchestrator | Thursday 18 September 2025 01:05:50 +0000 (0:00:25.638) 0:02:05.829 **** 2025-09-18 01:05:57.976836 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:05:57.976845 | orchestrator | 2025-09-18 01:05:57.976855 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-18 01:05:57.976864 | orchestrator | Thursday 18 September 2025 01:05:52 +0000 (0:00:02.529) 0:02:08.359 **** 2025-09-18 01:05:57.976874 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.976884 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:05:57.976893 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:05:57.976902 | orchestrator | 2025-09-18 01:05:57.976912 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-18 01:05:57.976922 | orchestrator | Thursday 18 September 2025 01:05:53 +0000 (0:00:00.557) 0:02:08.916 **** 2025-09-18 01:05:57.976938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-18 01:05:57.976950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-18 01:05:57.976960 | orchestrator | 2025-09-18 01:05:57.976970 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-18 01:05:57.976979 | orchestrator | Thursday 18 September 2025 01:05:55 +0000 (0:00:02.595) 0:02:11.512 **** 2025-09-18 01:05:57.976989 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:05:57.976999 | orchestrator | 2025-09-18 01:05:57.977008 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 01:05:57.977018 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 01:05:57.977033 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 01:05:57.977043 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 01:05:57.977052 | orchestrator | 2025-09-18 01:05:57.977062 | orchestrator | 2025-09-18 01:05:57.977072 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 01:05:57.977081 | orchestrator | Thursday 18 September 2025 01:05:56 +0000 (0:00:00.256) 0:02:11.769 **** 2025-09-18 01:05:57.977091 | orchestrator | =============================================================================== 2025-09-18 01:05:57.977100 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.93s 2025-09-18 01:05:57.977110 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.28s 2025-09-18 01:05:57.977120 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.64s 2025-09-18 01:05:57.977135 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.60s 2025-09-18 01:05:57.977145 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.53s 2025-09-18 01:05:57.977155 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.23s 2025-09-18 01:05:57.977164 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.16s 2025-09-18 01:05:57.977174 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.14s 2025-09-18 01:05:57.977184 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.61s 2025-09-18 01:05:57.977193 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s 2025-09-18 01:05:57.977202 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s 2025-09-18 01:05:57.977212 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.29s 2025-09-18 01:05:57.977270 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 
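The "Enable grafana datasources" task skips the disabled influxdb entry and registers the OpenSearch datasource shown above. A sketch of registering the same datasource through Grafana's HTTP API, with the payload fields taken from the logged item; the Grafana endpoint and admin credentials are placeholders, and the real play performs this through its own tasks rather than this script:

```python
import base64
import json
import urllib.request

# Placeholder endpoint and credentials for the freshly started Grafana.
GRAFANA_URL = "https://api-int.testbed.osism.xyz:3000"
USER, PASSWORD = "admin", "REPLACE_ME"

datasource = {
    "name": "opensearch",
    "type": "grafana-opensearch-datasource",
    "access": "proxy",
    "url": "https://api-int.testbed.osism.xyz:9200",
    "jsonData": {
        "flavor": "OpenSearch",
        "database": "flog-*",
        "version": "2.11.1",
        "timeField": "@timestamp",
        "logLevelField": "log_level",
    },
}

request = urllib.request.Request(
    f"{GRAFANA_URL}/api/datasources",
    data=json.dumps(datasource).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic "
        + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode(),
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:  # Grafana answers 200 on success
    print(response.status)
```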
1.12s 2025-09-18 01:05:57.977281 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.08s 2025-09-18 01:05:57.977291 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.02s 2025-09-18 01:05:57.977302 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.92s 2025-09-18 01:05:57.977313 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.84s 2025-09-18 01:05:57.977323 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s 2025-09-18 01:05:57.977341 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.72s 2025-09-18 01:05:57.977351 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s 2025-09-18 01:05:57.977362 | orchestrator | 2025-09-18 01:05:57 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:05:57.977373 | orchestrator | 2025-09-18 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:01.015870 | orchestrator | 2025-09-18 01:06:01 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:01.015968 | orchestrator | 2025-09-18 01:06:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:04.062332 | orchestrator | 2025-09-18 01:06:04 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:04.062416 | orchestrator | 2025-09-18 01:06:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:07.104767 | orchestrator | 2025-09-18 01:06:07 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:07.104867 | orchestrator | 2025-09-18 01:06:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:10.136269 | orchestrator | 2025-09-18 01:06:10 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:10.136369 | orchestrator | 2025-09-18 01:06:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:13.179313 | orchestrator | 2025-09-18 01:06:13 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:13.179416 | orchestrator | 2025-09-18 01:06:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:16.215916 | orchestrator | 2025-09-18 01:06:16 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:16.216025 | orchestrator | 2025-09-18 01:06:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:19.265980 | orchestrator | 2025-09-18 01:06:19 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:19.266241 | orchestrator | 2025-09-18 01:06:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:22.307508 | orchestrator | 2025-09-18 01:06:22 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:22.307644 | orchestrator | 2025-09-18 01:06:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:25.351244 | orchestrator | 2025-09-18 01:06:25 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:25.351375 | orchestrator | 2025-09-18 01:06:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:28.398781 | orchestrator | 2025-09-18 01:06:28 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:28.398912 | orchestrator | 2025-09-18 01:06:28 
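The repeated "Task 02e17d20-… is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the OSISM wrapper that launched the Kolla play as a background task and now polls its state until it leaves STARTED (here it reaches SUCCESS a few minutes later). A minimal sketch of such a wait loop; `get_task_state` is a stand-in for whatever status lookup the wrapper actually uses (the STARTED/SUCCESS names suggest a Celery-style result backend, but that is an assumption):

```python
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)
log = logging.getLogger(__name__)

def get_task_state(task_id: str) -> str:
    """Stand-in for the real status lookup (assumed, not the wrapper's actual API)."""
    raise NotImplementedError

def wait_for_task(task_id: str, interval: float = 1.0) -> str:
    """Poll until the task reaches a terminal state, logging each check."""
    while True:
        state = get_task_state(task_id)
        log.info("Task %s is in state %s", task_id, state)
        if state in ("SUCCESS", "FAILURE"):
            return state
        log.info("Wait %d second(s) until the next check", int(interval))
        time.sleep(interval)
```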
| INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:31.441711 | orchestrator | 2025-09-18 01:06:31 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:31.441844 | orchestrator | 2025-09-18 01:06:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:34.490643 | orchestrator | 2025-09-18 01:06:34 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:34.490767 | orchestrator | 2025-09-18 01:06:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:37.532926 | orchestrator | 2025-09-18 01:06:37 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:37.533029 | orchestrator | 2025-09-18 01:06:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:40.577606 | orchestrator | 2025-09-18 01:06:40 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:40.577700 | orchestrator | 2025-09-18 01:06:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:43.615439 | orchestrator | 2025-09-18 01:06:43 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:43.615545 | orchestrator | 2025-09-18 01:06:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:46.657364 | orchestrator | 2025-09-18 01:06:46 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:46.657477 | orchestrator | 2025-09-18 01:06:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:49.699499 | orchestrator | 2025-09-18 01:06:49 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:49.700322 | orchestrator | 2025-09-18 01:06:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:52.738977 | orchestrator | 2025-09-18 01:06:52 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:52.739088 | orchestrator | 2025-09-18 01:06:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:55.777613 | orchestrator | 2025-09-18 01:06:55 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:55.777718 | orchestrator | 2025-09-18 01:06:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:06:58.824248 | orchestrator | 2025-09-18 01:06:58 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:06:58.824332 | orchestrator | 2025-09-18 01:06:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:01.859372 | orchestrator | 2025-09-18 01:07:01 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:01.860155 | orchestrator | 2025-09-18 01:07:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:04.890367 | orchestrator | 2025-09-18 01:07:04 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:04.890485 | orchestrator | 2025-09-18 01:07:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:07.935849 | orchestrator | 2025-09-18 01:07:07 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:07.935946 | orchestrator | 2025-09-18 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:10.975524 | orchestrator | 2025-09-18 01:07:10 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:10.975620 | orchestrator | 2025-09-18 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:14.024619 | 
orchestrator | 2025-09-18 01:07:14 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:14.024722 | orchestrator | 2025-09-18 01:07:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:17.058822 | orchestrator | 2025-09-18 01:07:17 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:17.058933 | orchestrator | 2025-09-18 01:07:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:20.116306 | orchestrator | 2025-09-18 01:07:20 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:20.116388 | orchestrator | 2025-09-18 01:07:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:23.151855 | orchestrator | 2025-09-18 01:07:23 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:23.151998 | orchestrator | 2025-09-18 01:07:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:26.192573 | orchestrator | 2025-09-18 01:07:26 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:26.192684 | orchestrator | 2025-09-18 01:07:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:29.231647 | orchestrator | 2025-09-18 01:07:29 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:29.231724 | orchestrator | 2025-09-18 01:07:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:32.270339 | orchestrator | 2025-09-18 01:07:32 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:32.270442 | orchestrator | 2025-09-18 01:07:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:35.310835 | orchestrator | 2025-09-18 01:07:35 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:35.310943 | orchestrator | 2025-09-18 01:07:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:38.355079 | orchestrator | 2025-09-18 01:07:38 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:38.355198 | orchestrator | 2025-09-18 01:07:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:41.394325 | orchestrator | 2025-09-18 01:07:41 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:41.394435 | orchestrator | 2025-09-18 01:07:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:44.426222 | orchestrator | 2025-09-18 01:07:44 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:44.426332 | orchestrator | 2025-09-18 01:07:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:47.468499 | orchestrator | 2025-09-18 01:07:47 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:47.468606 | orchestrator | 2025-09-18 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:50.510078 | orchestrator | 2025-09-18 01:07:50 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:50.510201 | orchestrator | 2025-09-18 01:07:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:53.556700 | orchestrator | 2025-09-18 01:07:53 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:53.556806 | orchestrator | 2025-09-18 01:07:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:56.600296 | orchestrator | 2025-09-18 01:07:56 | INFO  | Task 
02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:56.600403 | orchestrator | 2025-09-18 01:07:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:07:59.642966 | orchestrator | 2025-09-18 01:07:59 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:07:59.643072 | orchestrator | 2025-09-18 01:07:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:02.685203 | orchestrator | 2025-09-18 01:08:02 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:02.685304 | orchestrator | 2025-09-18 01:08:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:05.731219 | orchestrator | 2025-09-18 01:08:05 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:05.731326 | orchestrator | 2025-09-18 01:08:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:08.772686 | orchestrator | 2025-09-18 01:08:08 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:08.772815 | orchestrator | 2025-09-18 01:08:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:11.818431 | orchestrator | 2025-09-18 01:08:11 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:11.818541 | orchestrator | 2025-09-18 01:08:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:14.861056 | orchestrator | 2025-09-18 01:08:14 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:14.861210 | orchestrator | 2025-09-18 01:08:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:17.901015 | orchestrator | 2025-09-18 01:08:17 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:17.901298 | orchestrator | 2025-09-18 01:08:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:20.942369 | orchestrator | 2025-09-18 01:08:20 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:20.942473 | orchestrator | 2025-09-18 01:08:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:23.991234 | orchestrator | 2025-09-18 01:08:23 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:23.991342 | orchestrator | 2025-09-18 01:08:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:27.027657 | orchestrator | 2025-09-18 01:08:27 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:27.027765 | orchestrator | 2025-09-18 01:08:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:30.071785 | orchestrator | 2025-09-18 01:08:30 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:30.071885 | orchestrator | 2025-09-18 01:08:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:33.113279 | orchestrator | 2025-09-18 01:08:33 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:33.113384 | orchestrator | 2025-09-18 01:08:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:36.163595 | orchestrator | 2025-09-18 01:08:36 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:36.163700 | orchestrator | 2025-09-18 01:08:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:39.194968 | orchestrator | 2025-09-18 01:08:39 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 
01:08:39.195058 | orchestrator | 2025-09-18 01:08:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:42.242826 | orchestrator | 2025-09-18 01:08:42 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:42.242941 | orchestrator | 2025-09-18 01:08:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:45.284315 | orchestrator | 2025-09-18 01:08:45 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:45.284419 | orchestrator | 2025-09-18 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:48.323890 | orchestrator | 2025-09-18 01:08:48 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state STARTED 2025-09-18 01:08:48.324009 | orchestrator | 2025-09-18 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 01:08:51.374627 | orchestrator | 2025-09-18 01:08:51 | INFO  | Task 02e17d20-501b-440c-9192-b1e0c145f9a3 is in state SUCCESS 2025-09-18 01:08:51.376313 | orchestrator | 2025-09-18 01:08:51.376359 | orchestrator | 2025-09-18 01:08:51.376372 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:08:51.376384 | orchestrator | 2025-09-18 01:08:51.376396 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:08:51.376407 | orchestrator | Thursday 18 September 2025 01:04:02 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-18 01:08:51.376447 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.376467 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:08:51.377107 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:08:51.377125 | orchestrator | 2025-09-18 01:08:51.377137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:08:51.377173 | orchestrator | Thursday 18 September 2025 01:04:02 +0000 (0:00:00.291) 0:00:00.554 **** 2025-09-18 01:08:51.377185 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-18 01:08:51.377197 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-18 01:08:51.377207 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-18 01:08:51.378217 | orchestrator | 2025-09-18 01:08:51.378328 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-18 01:08:51.378345 | orchestrator | 2025-09-18 01:08:51.378357 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 01:08:51.378369 | orchestrator | Thursday 18 September 2025 01:04:03 +0000 (0:00:00.424) 0:00:00.979 **** 2025-09-18 01:08:51.378380 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:08:51.378392 | orchestrator | 2025-09-18 01:08:51.378403 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-18 01:08:51.378414 | orchestrator | Thursday 18 September 2025 01:04:03 +0000 (0:00:00.564) 0:00:01.543 **** 2025-09-18 01:08:51.378425 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-18 01:08:51.378436 | orchestrator | 2025-09-18 01:08:51.378447 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-18 01:08:51.378458 | orchestrator | Thursday 18 September 2025 01:04:07 +0000 (0:00:03.345) 0:00:04.888 **** 2025-09-18 01:08:51.378469 | 
2025-09-18 01:08:51.376313 | orchestrator | 2025-09-18 01:08:51.376359 | orchestrator | 2025-09-18 01:08:51.376372 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 01:08:51.376384 | orchestrator | 2025-09-18 01:08:51.376396 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 01:08:51.376407 | orchestrator | Thursday 18 September 2025 01:04:02 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-18 01:08:51.376447 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.376467 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:08:51.377107 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:08:51.377125 | orchestrator | 2025-09-18 01:08:51.377137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 01:08:51.377173 | orchestrator | Thursday 18 September 2025 01:04:02 +0000 (0:00:00.291) 0:00:00.554 **** 2025-09-18 01:08:51.377185 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-18 01:08:51.377197 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-18 01:08:51.377207 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-18 01:08:51.378217 | orchestrator | 2025-09-18 01:08:51.378328 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-18 01:08:51.378345 | orchestrator | 2025-09-18 01:08:51.378357 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 01:08:51.378369 | orchestrator | Thursday 18 September 2025 01:04:03 +0000 (0:00:00.424) 0:00:00.979 **** 2025-09-18 01:08:51.378380 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:08:51.378392 | orchestrator | 2025-09-18 01:08:51.378403 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-18 01:08:51.378414 | orchestrator | Thursday 18 September 2025 01:04:03 +0000 (0:00:00.564) 0:00:01.543 **** 2025-09-18 01:08:51.378425 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-18 01:08:51.378436 | orchestrator | 2025-09-18 01:08:51.378447 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-18 01:08:51.378458 | orchestrator | Thursday 18 September 2025 01:04:07 +0000 (0:00:03.345) 0:00:04.888 **** 2025-09-18 01:08:51.378469 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-18 01:08:51.378480 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-18 01:08:51.378491 | orchestrator | 2025-09-18 01:08:51.378502 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-18 01:08:51.378533 | orchestrator | Thursday 18 September 2025 01:04:13 +0000 (0:00:06.602) 0:00:11.491 **** 2025-09-18 01:08:51.378545 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 01:08:51.378557 | orchestrator | 2025-09-18 01:08:51.378568 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-18 01:08:51.378579 | orchestrator | Thursday 18 September 2025 01:04:17 +0000 (0:00:03.874) 0:00:15.365 **** 2025-09-18 01:08:51.378589 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 01:08:51.378600 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-18 01:08:51.378611 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-18 01:08:51.378622 | orchestrator | 2025-09-18 01:08:51.378633 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-18 01:08:51.378644 | orchestrator | Thursday 18 September 2025 01:04:26 +0000 (0:00:08.784) 0:00:24.149 **** 2025-09-18 01:08:51.378654 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 01:08:51.378665 | orchestrator | 2025-09-18 01:08:51.378676 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-18 01:08:51.378687 | orchestrator | Thursday 18 September 2025 01:04:30 +0000 (0:00:03.644) 0:00:27.794 **** 2025-09-18 01:08:51.378698 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-18 01:08:51.378709 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-18 01:08:51.378719 | orchestrator | 2025-09-18 01:08:51.378730 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-18 01:08:51.378741 | orchestrator | Thursday 18 September 2025 01:04:38 +0000 (0:00:07.965) 0:00:35.760 **** 2025-09-18 01:08:51.378752 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-18 01:08:51.378784 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-18 01:08:51.378796 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-18 01:08:51.378806 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-18 01:08:51.378817 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-18 01:08:51.378827 | orchestrator | 2025-09-18 01:08:51.378838 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 01:08:51.378849 | orchestrator | Thursday 18 September 2025 01:04:54 +0000 (0:00:16.878) 0:00:52.638 **** 2025-09-18 01:08:51.378860 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
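The prepare.yml tasks that follow create the amphora flavor, a Nova keypair, the lb-mgmt-sec-grp and lb-health-mgr-sec-grp security groups with their ICMP/22/9443/5555 rules, and the load-balancer management network and subnet. A rough openstacksdk sketch of those steps (cloud name, resource names, flavor sizing and CIDR are illustrative assumptions, not values read from this deployment):

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# Amphora flavor and SSH keypair (names/sizing are assumptions)
conn.compute.create_flavor(name="amphora", ram=1024, vcpus=1, disk=5, is_public=False)
conn.compute.create_keypair(name="octavia_ssh_key")

# Security groups and rules matching the loop items shown in the log
mgmt_sg = conn.network.create_security_group(name="lb-mgmt-sec-grp")
conn.network.create_security_group_rule(security_group_id=mgmt_sg.id, direction="ingress", protocol="icmp")
conn.network.create_security_group_rule(security_group_id=mgmt_sg.id, direction="ingress", protocol="tcp", port_range_min=22, port_range_max=22)
conn.network.create_security_group_rule(security_group_id=mgmt_sg.id, direction="ingress", protocol="tcp", port_range_min=9443, port_range_max=9443)
hm_sg = conn.network.create_security_group(name="lb-health-mgr-sec-grp")
conn.network.create_security_group_rule(security_group_id=hm_sg.id, direction="ingress", protocol="udp", port_range_min=5555, port_range_max=5555)

# Load-balancer management network and subnet (name and CIDR assumed)
net = conn.network.create_network(name="lb-mgmt-net")
conn.network.create_subnet(network_id=net.id, name="lb-mgmt-subnet", ip_version=4, cidr="10.1.0.0/24")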
2025-09-18 01:08:51.378871 | orchestrator | 2025-09-18 01:08:51.378882 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-18 01:08:51.378892 | orchestrator | Thursday 18 September 2025 01:04:55 +0000 (0:00:00.553) 0:00:53.191 **** 2025-09-18 01:08:51.378903 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.378914 | orchestrator | 2025-09-18 01:08:51.378925 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-18 01:08:51.378935 | orchestrator | Thursday 18 September 2025 01:05:00 +0000 (0:00:04.813) 0:00:58.005 **** 2025-09-18 01:08:51.378946 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.378957 | orchestrator | 2025-09-18 01:08:51.378968 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-18 01:08:51.379017 | orchestrator | Thursday 18 September 2025 01:05:04 +0000 (0:00:04.007) 0:01:02.012 **** 2025-09-18 01:08:51.379030 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.379041 | orchestrator | 2025-09-18 01:08:51.379052 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-18 01:08:51.379063 | orchestrator | Thursday 18 September 2025 01:05:07 +0000 (0:00:03.670) 0:01:05.683 **** 2025-09-18 01:08:51.379074 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-18 01:08:51.379085 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-18 01:08:51.379096 | orchestrator | 2025-09-18 01:08:51.379107 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-18 01:08:51.379118 | orchestrator | Thursday 18 September 2025 01:05:19 +0000 (0:00:11.924) 0:01:17.607 **** 2025-09-18 01:08:51.379129 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-18 01:08:51.379140 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-18 01:08:51.379173 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-18 01:08:51.379185 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-18 01:08:51.379197 | orchestrator | 2025-09-18 01:08:51.379208 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-18 01:08:51.379219 | orchestrator | Thursday 18 September 2025 01:05:36 +0000 (0:00:16.721) 0:01:34.329 **** 2025-09-18 01:08:51.379230 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379241 | orchestrator | 2025-09-18 01:08:51.379252 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-18 01:08:51.379262 | orchestrator | Thursday 18 September 2025 01:05:41 +0000 (0:00:05.005) 0:01:39.334 **** 2025-09-18 01:08:51.379273 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379284 | orchestrator | 2025-09-18 01:08:51.379295 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-18 01:08:51.379307 | orchestrator | Thursday 18 September 2025 01:05:47 +0000 (0:00:05.590) 0:01:44.925 **** 2025-09-18 01:08:51.379318 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:08:51.379336 | orchestrator | 2025-09-18 01:08:51.379347 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-18
01:08:51.379364 | orchestrator | Thursday 18 September 2025 01:05:47 +0000 (0:00:00.200) 0:01:45.125 **** 2025-09-18 01:08:51.379375 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379386 | orchestrator | 2025-09-18 01:08:51.379397 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 01:08:51.379408 | orchestrator | Thursday 18 September 2025 01:05:52 +0000 (0:00:04.898) 0:01:50.024 **** 2025-09-18 01:08:51.379420 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:08:51.379431 | orchestrator | 2025-09-18 01:08:51.379442 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-18 01:08:51.379453 | orchestrator | Thursday 18 September 2025 01:05:53 +0000 (0:00:00.965) 0:01:50.989 **** 2025-09-18 01:08:51.379463 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379475 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.379485 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379496 | orchestrator | 2025-09-18 01:08:51.379507 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-18 01:08:51.379519 | orchestrator | Thursday 18 September 2025 01:05:58 +0000 (0:00:05.717) 0:01:56.707 **** 2025-09-18 01:08:51.379530 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.379540 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379551 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379562 | orchestrator | 2025-09-18 01:08:51.379573 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-18 01:08:51.379584 | orchestrator | Thursday 18 September 2025 01:06:03 +0000 (0:00:04.804) 0:02:01.511 **** 2025-09-18 01:08:51.379595 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379606 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379616 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.379627 | orchestrator | 2025-09-18 01:08:51.379638 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-18 01:08:51.379649 | orchestrator | Thursday 18 September 2025 01:06:04 +0000 (0:00:00.835) 0:02:02.346 **** 2025-09-18 01:08:51.379660 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:08:51.379671 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.379682 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:08:51.379693 | orchestrator | 2025-09-18 01:08:51.379704 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-18 01:08:51.379715 | orchestrator | Thursday 18 September 2025 01:06:06 +0000 (0:00:02.160) 0:02:04.507 **** 2025-09-18 01:08:51.379726 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379737 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.379748 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379759 | orchestrator | 2025-09-18 01:08:51.379769 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-18 01:08:51.379780 | orchestrator | Thursday 18 September 2025 01:06:08 +0000 (0:00:01.337) 0:02:05.844 **** 2025-09-18 01:08:51.379791 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379802 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379813 | orchestrator | 
changed: [testbed-node-2] 2025-09-18 01:08:51.379824 | orchestrator | 2025-09-18 01:08:51.379835 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-18 01:08:51.379846 | orchestrator | Thursday 18 September 2025 01:06:09 +0000 (0:00:01.253) 0:02:07.097 **** 2025-09-18 01:08:51.379857 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379868 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379879 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.379890 | orchestrator | 2025-09-18 01:08:51.379920 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-18 01:08:51.379932 | orchestrator | Thursday 18 September 2025 01:06:11 +0000 (0:00:02.010) 0:02:09.108 **** 2025-09-18 01:08:51.379950 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.379961 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.379971 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.379982 | orchestrator | 2025-09-18 01:08:51.379993 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-18 01:08:51.380003 | orchestrator | Thursday 18 September 2025 01:06:13 +0000 (0:00:01.654) 0:02:10.763 **** 2025-09-18 01:08:51.380014 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.380025 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:08:51.380036 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:08:51.380047 | orchestrator | 2025-09-18 01:08:51.380058 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-18 01:08:51.380068 | orchestrator | Thursday 18 September 2025 01:06:13 +0000 (0:00:00.860) 0:02:11.623 **** 2025-09-18 01:08:51.380079 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:08:51.380089 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:08:51.380100 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.380111 | orchestrator | 2025-09-18 01:08:51.380122 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 01:08:51.380132 | orchestrator | Thursday 18 September 2025 01:06:17 +0000 (0:00:03.736) 0:02:15.360 **** 2025-09-18 01:08:51.380161 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:08:51.380173 | orchestrator | 2025-09-18 01:08:51.380184 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-18 01:08:51.380195 | orchestrator | Thursday 18 September 2025 01:06:18 +0000 (0:00:00.495) 0:02:15.855 **** 2025-09-18 01:08:51.380206 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.380217 | orchestrator | 2025-09-18 01:08:51.380228 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-18 01:08:51.380238 | orchestrator | Thursday 18 September 2025 01:06:22 +0000 (0:00:04.102) 0:02:19.957 **** 2025-09-18 01:08:51.380249 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.380260 | orchestrator | 2025-09-18 01:08:51.380271 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-18 01:08:51.380282 | orchestrator | Thursday 18 September 2025 01:06:25 +0000 (0:00:03.358) 0:02:23.316 **** 2025-09-18 01:08:51.380293 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-18 01:08:51.380304 | orchestrator | ok: 
[testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-18 01:08:51.380315 | orchestrator | 2025-09-18 01:08:51.380325 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-18 01:08:51.380341 | orchestrator | Thursday 18 September 2025 01:06:33 +0000 (0:00:08.139) 0:02:31.456 **** 2025-09-18 01:08:51.380352 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.380363 | orchestrator | 2025-09-18 01:08:51.380374 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-18 01:08:51.380385 | orchestrator | Thursday 18 September 2025 01:06:37 +0000 (0:00:03.541) 0:02:34.997 **** 2025-09-18 01:08:51.380395 | orchestrator | ok: [testbed-node-0] 2025-09-18 01:08:51.380406 | orchestrator | ok: [testbed-node-1] 2025-09-18 01:08:51.380417 | orchestrator | ok: [testbed-node-2] 2025-09-18 01:08:51.380428 | orchestrator | 2025-09-18 01:08:51.380439 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-18 01:08:51.380450 | orchestrator | Thursday 18 September 2025 01:06:37 +0000 (0:00:00.326) 0:02:35.323 **** 2025-09-18 01:08:51.380465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.380504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.380517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.380530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.380547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.380559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.380570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380619 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.380713 | orchestrator | 2025-09-18 01:08:51.380725 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-18 01:08:51.380736 | orchestrator | Thursday 18 September 2025 01:06:40 +0000 (0:00:02.607) 0:02:37.931 **** 2025-09-18 01:08:51.380747 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:08:51.380758 | orchestrator | 2025-09-18 01:08:51.380785 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-18 01:08:51.380797 | orchestrator | Thursday 18 September 2025 01:06:40 +0000 (0:00:00.128) 0:02:38.059 **** 2025-09-18 01:08:51.380808 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:08:51.380819 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:08:51.380830 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:08:51.380845 | orchestrator | 2025-09-18 01:08:51.380856 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-18 01:08:51.380867 | orchestrator | Thursday 18 September 2025 01:06:40 +0000 (0:00:00.457) 0:02:38.517 **** 2025-09-18 01:08:51.380879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.380896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.380908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.380926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.380937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.380949 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:08:51.380979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.380992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.381044 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:08:51.381055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.381087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.381209 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:08:51.381220 | orchestrator | 2025-09-18 01:08:51.381232 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 01:08:51.381243 | orchestrator | Thursday 18 September 2025 01:06:41 +0000 (0:00:00.685) 0:02:39.202 **** 2025-09-18 01:08:51.381254 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 01:08:51.381265 | orchestrator | 2025-09-18 01:08:51.381276 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-18 01:08:51.381286 | orchestrator | Thursday 18 September 2025 01:06:42 +0000 (0:00:00.546) 0:02:39.749 **** 2025-09-18 01:08:51.381298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.381329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.381342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.381360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.381377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.381389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.381401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.381533 | orchestrator | 2025-09-18 01:08:51.381544 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-18 01:08:51.381556 | orchestrator | Thursday 18 September 2025 01:06:47 +0000 (0:00:05.190) 0:02:44.939 **** 2025-09-18 01:08:51.381567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.381585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.381636 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:08:51.381654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.381666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.381723 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:08:51.381734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.381751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.381808 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:08:51.381819 | orchestrator | 2025-09-18 01:08:51.381831 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-18 01:08:51.381842 | orchestrator | Thursday 18 September 2025 01:06:48 +0000 (0:00:00.840) 0:02:45.780 **** 2025-09-18 01:08:51.381854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.381865 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.381924 | orchestrator | skipping: [testbed-node-0] 2025-09-18 01:08:51.381940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.381953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.381964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.381993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.382048 | orchestrator | skipping: [testbed-node-1] 2025-09-18 01:08:51.382063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 01:08:51.382080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 01:08:51.382091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.382103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 01:08:51.382114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 01:08:51.382126 | orchestrator | skipping: [testbed-node-2] 2025-09-18 01:08:51.382137 | orchestrator | 2025-09-18 01:08:51.382179 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-18 01:08:51.382199 | orchestrator | Thursday 18 September 2025 01:06:48 +0000 (0:00:00.887) 0:02:46.668 **** 2025-09-18 01:08:51.382220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.382232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.382249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.382261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.382272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.382292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-09-18 01:08:51.382310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382434 | orchestrator | 2025-09-18 01:08:51.382445 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-18 01:08:51.382457 | orchestrator | Thursday 18 September 2025 01:06:54 +0000 (0:00:05.321) 0:02:51.990 **** 2025-09-18 01:08:51.382468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-18 01:08:51.382479 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-18 01:08:51.382490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-18 01:08:51.382501 | orchestrator | 2025-09-18 01:08:51.382517 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-18 01:08:51.382528 | orchestrator | Thursday 18 September 2025 01:06:56 +0000 (0:00:01.990) 0:02:53.980 **** 2025-09-18 01:08:51.382539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.382551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.382577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.382589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.382601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.382618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.382630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.382757 | orchestrator | 2025-09-18 01:08:51.382768 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-18 01:08:51.382780 | orchestrator | Thursday 18 September 2025 01:07:11 +0000 (0:00:15.714) 0:03:09.694 **** 2025-09-18 01:08:51.382791 | orchestrator | changed: [testbed-node-0] 2025-09-18 01:08:51.382802 | orchestrator | changed: [testbed-node-1] 2025-09-18 01:08:51.382813 | orchestrator | changed: [testbed-node-2] 2025-09-18 01:08:51.382824 | orchestrator | 2025-09-18 01:08:51.382835 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-18 01:08:51.382846 | orchestrator | Thursday 18 
September 2025 01:07:13 +0000 (0:00:01.497) 0:03:11.192 **** 2025-09-18 01:08:51.382857 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.382868 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.382884 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.382896 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.382907 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.382917 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.382928 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.382939 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.382950 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.382960 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-18 01:08:51.382971 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-18 01:08:51.382982 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-18 01:08:51.382993 | orchestrator | 2025-09-18 01:08:51.383003 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-18 01:08:51.383014 | orchestrator | Thursday 18 September 2025 01:07:18 +0000 (0:00:05.290) 0:03:16.482 **** 2025-09-18 01:08:51.383025 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.383036 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.383047 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.383057 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.383068 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.383079 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.383090 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.383102 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.383113 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.383124 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-18 01:08:51.383135 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-18 01:08:51.383165 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-18 01:08:51.383183 | orchestrator | 2025-09-18 01:08:51.383194 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-18 01:08:51.383205 | orchestrator | Thursday 18 September 2025 01:07:24 +0000 (0:00:05.330) 0:03:21.813 **** 2025-09-18 01:08:51.383221 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.383232 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.383243 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-18 01:08:51.383254 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.383265 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-18 
01:08:51.383277 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-18 01:08:51.383287 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.383298 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.383309 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-18 01:08:51.383320 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-18 01:08:51.383330 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-18 01:08:51.383341 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-18 01:08:51.383352 | orchestrator | 2025-09-18 01:08:51.383363 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-18 01:08:51.383374 | orchestrator | Thursday 18 September 2025 01:07:29 +0000 (0:00:05.196) 0:03:27.009 **** 2025-09-18 01:08:51.383385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.383404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.383417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 01:08:51.383442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.383454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.383465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 01:08:51.383477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 01:08:51.383593 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 01:08:51.383604 | orchestrator |
2025-09-18 01:08:51.383616 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-18 01:08:51.383627 | orchestrator | Thursday 18 September 2025 01:07:32 +0000 (0:00:03.701) 0:03:30.710 ****
2025-09-18 01:08:51.383638 | orchestrator | skipping: [testbed-node-0]
2025-09-18 01:08:51.383649 | orchestrator | skipping: [testbed-node-1]
2025-09-18 01:08:51.383666 | orchestrator | skipping: [testbed-node-2]
2025-09-18 01:08:51.383678 | orchestrator |
2025-09-18 01:08:51.383689 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-09-18 01:08:51.383700 | orchestrator | Thursday 18 September 2025 01:07:33 +0000 (0:00:00.287) 0:03:30.998 ****
2025-09-18 01:08:51.383711 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.383722 | orchestrator |
2025-09-18 01:08:51.383732 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-09-18 01:08:51.383743 | orchestrator | Thursday 18 September 2025 01:07:35 +0000 (0:00:02.215) 0:03:33.213 ****
2025-09-18 01:08:51.383755 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.383766 | orchestrator |
2025-09-18 01:08:51.383777 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-09-18 01:08:51.383788 | orchestrator | Thursday 18 September 2025 01:07:37 +0000 (0:00:02.273) 0:03:35.451 ****
2025-09-18 01:08:51.383799 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.383809 | orchestrator |
2025-09-18 01:08:51.383820 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-09-18 01:08:51.383831 | orchestrator | Thursday 18 September 2025 01:07:39 +0000 (0:00:02.273) 0:03:37.724 ****
2025-09-18 01:08:51.383842 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.383853 | orchestrator |
2025-09-18 01:08:51.383864 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-09-18 01:08:51.383875 | orchestrator | Thursday 18 September 2025 01:07:42 +0000 (0:00:02.373) 0:03:40.098 ****
2025-09-18 01:08:51.383886 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.383897 | orchestrator |
2025-09-18 01:08:51.383908 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-09-18 01:08:51.383919 | orchestrator | Thursday 18 September 2025 01:08:04 +0000 (0:00:22.338) 0:04:02.436 ****
2025-09-18 01:08:51.383930 | orchestrator |
2025-09-18 01:08:51.383940 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-09-18 01:08:51.383951 | orchestrator | Thursday 18 September 2025 01:08:04 +0000 (0:00:00.070) 0:04:02.507 ****
2025-09-18 01:08:51.383962 | orchestrator |
2025-09-18 01:08:51.383978 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-09-18 01:08:51.383989 | orchestrator | Thursday 18 September 2025 01:08:04 +0000 (0:00:00.065) 0:04:02.572 ****
2025-09-18 01:08:51.384000 | orchestrator |
2025-09-18 01:08:51.384011 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-09-18 01:08:51.384022 | orchestrator | Thursday 18 September 2025 01:08:04 +0000 (0:00:00.060) 0:04:02.633 ****
2025-09-18 01:08:51.384033 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.384044 | orchestrator | changed: [testbed-node-2]
2025-09-18 01:08:51.384055 | orchestrator | changed: [testbed-node-1]
2025-09-18 01:08:51.384066 | orchestrator |
2025-09-18 01:08:51.384077 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-09-18 01:08:51.384088 | orchestrator | Thursday 18 September 2025 01:08:16 +0000 (0:00:11.287) 0:04:13.920 ****
2025-09-18 01:08:51.384099 | orchestrator | changed: [testbed-node-2]
2025-09-18 01:08:51.384110 | orchestrator | changed: [testbed-node-1]
2025-09-18 01:08:51.384121 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.384132 | orchestrator |
2025-09-18 01:08:51.384193 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-09-18 01:08:51.384206 | orchestrator | Thursday 18 September 2025 01:08:24 +0000 (0:00:08.118) 0:04:22.039 ****
2025-09-18 01:08:51.384217 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.384227 | orchestrator | changed: [testbed-node-2]
2025-09-18 01:08:51.384238 | orchestrator | changed: [testbed-node-1]
2025-09-18 01:08:51.384250 | orchestrator |
2025-09-18 01:08:51.384260 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-09-18 01:08:51.384271 | orchestrator | Thursday 18 September 2025 01:08:34 +0000 (0:00:10.236) 0:04:32.275 ****
2025-09-18 01:08:51.384282 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.384293 | orchestrator | changed: [testbed-node-2]
2025-09-18 01:08:51.384312 | orchestrator | changed: [testbed-node-1]
2025-09-18 01:08:51.384323 | orchestrator |
2025-09-18 01:08:51.384334 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-09-18 01:08:51.384345 | orchestrator | Thursday 18 September 2025 01:08:45 +0000 (0:00:10.541) 0:04:42.817 ****
2025-09-18 01:08:51.384356 | orchestrator | changed: [testbed-node-0]
2025-09-18 01:08:51.384366 | orchestrator | changed: [testbed-node-1]
2025-09-18 01:08:51.384376 | orchestrator | changed: [testbed-node-2]
2025-09-18 01:08:51.384386 | orchestrator |
2025-09-18 01:08:51.384395 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 01:08:51.384405 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-18 01:08:51.384416 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-18 01:08:51.384426 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-18 01:08:51.384435 | orchestrator |
2025-09-18 01:08:51.384445 | orchestrator |
2025-09-18 01:08:51.384455 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 01:08:51.384465 | orchestrator | Thursday 18 September 2025 01:08:50 +0000 (0:00:05.567) 0:04:48.385 ****
2025-09-18 01:08:51.384480 | orchestrator | ===============================================================================
2025-09-18 01:08:51.384490 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.34s
2025-09-18 01:08:51.384500 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.88s
2025-09-18 01:08:51.384509 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.72s
2025-09-18 01:08:51.384519 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.71s
2025-09-18 01:08:51.384529 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.92s
2025-09-18 01:08:51.384538 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.29s
2025-09-18 01:08:51.384548 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.54s
2025-09-18 01:08:51.384558 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.24s
2025-09-18 01:08:51.384567 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.78s
2025-09-18 01:08:51.384577 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.14s
2025-09-18 01:08:51.384586 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.12s
2025-09-18 01:08:51.384610 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.97s
2025-09-18 01:08:51.384621 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.60s
2025-09-18 01:08:51.384641 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.72s
2025-09-18 01:08:51.384651 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.59s
2025-09-18 01:08:51.384660 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.57s
2025-09-18 01:08:51.384670 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.33s
2025-09-18 01:08:51.384680 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.32s
2025-09-18 01:08:51.384690 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.29s
2025-09-18 01:08:51.384699 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.20s
2025-09-18 01:08:51.384709 | orchestrator | 2025-09-18 01:08:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:08:54.415560 | orchestrator | 2025-09-18 01:08:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:08:57.454478 | orchestrator | 2025-09-18 01:08:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:00.494333 | orchestrator | 2025-09-18 01:09:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:03.538619 | orchestrator | 2025-09-18 01:09:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:06.575604 | orchestrator | 2025-09-18 01:09:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:09.619549 | orchestrator | 2025-09-18 01:09:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:12.657624 | orchestrator | 2025-09-18 01:09:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:15.700391 | orchestrator | 2025-09-18 01:09:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:18.739934 | orchestrator | 2025-09-18 01:09:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:21.780830 | orchestrator | 2025-09-18 01:09:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:24.825102 | orchestrator | 2025-09-18 01:09:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:27.864954 | orchestrator | 2025-09-18 01:09:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:30.909349 | orchestrator | 2025-09-18 01:09:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:33.948729 | orchestrator | 2025-09-18 01:09:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:36.987717 | orchestrator | 2025-09-18 01:09:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:40.036377 | orchestrator | 2025-09-18 01:09:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:43.077716 | orchestrator | 2025-09-18 01:09:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:46.117907 | orchestrator | 2025-09-18 01:09:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:49.159640 | orchestrator | 2025-09-18 01:09:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-18 01:09:52.205778 | orchestrator |
2025-09-18 01:09:52.488585 | orchestrator |
2025-09-18 01:09:52.492678 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Sep 18 01:09:52 UTC 2025
2025-09-18 01:09:52.494358 | orchestrator |
2025-09-18 01:09:52.874132 | orchestrator | ok: Runtime: 0:33:20.217389
2025-09-18 01:09:53.132937 |
2025-09-18 01:09:53.133080 | TASK [Bootstrap services]
2025-09-18 01:09:53.874975 | orchestrator |
2025-09-18 01:09:53.875218 | orchestrator | # BOOTSTRAP
2025-09-18 01:09:53.875245 | orchestrator |
2025-09-18 01:09:53.875260 | orchestrator | + set -e
2025-09-18 01:09:53.875272 | orchestrator | + echo
2025-09-18 01:09:53.875285 | orchestrator | + echo '# BOOTSTRAP'
2025-09-18 01:09:53.875302 | orchestrator | + echo
2025-09-18 01:09:53.875347 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-09-18 01:09:53.883037 | orchestrator | + set -e
2025-09-18 01:09:53.883064 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-09-18 01:09:57.769979 | orchestrator | 2025-09-18 01:09:57 | INFO  | It takes a moment until task 597ff288-0846-4013-a7e2-132af2dccfdc (flavor-manager) has been started and output is visible here.
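The traceback that follows is emitted by openstack_flavor_manager (started by 300-openstack.sh through the flavor-manager task). Its locals dump shows the SCS flavor definitions being loaded: a 'reference' list describing the expected fields and their defaults, plus a 'mandatory' list of concrete flavors such as SCS-1L-1 and SCS-1V-2. As a minimal, illustrative sketch of that data shape only (apply_defaults below is a hypothetical helper, not the openstack_flavor_manager implementation), the declared defaults can be folded into one mandatory entry like this:

# Illustrative sketch: mirrors the 'reference'/'mandatory' structure visible in
# the locals dump below; apply_defaults is hypothetical, not flavor-manager code.
reference = [
    {"field": "name", "mandatory_prefix": "SCS-"},
    {"field": "cpus"},
    {"field": "ram"},
    {"field": "disk"},
    {"field": "public", "default": True},
    {"field": "disabled", "default": False},
]

def apply_defaults(reference, flavor):
    # Fill in any field that declares a default and is missing from the entry.
    merged = dict(flavor)
    for spec in reference:
        if spec["field"] not in merged and "default" in spec:
            merged[spec["field"]] = spec["default"]
    return merged

flavor = {"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}
print(apply_defaults(reference, flavor))
# -> {'name': 'SCS-1L-1', 'cpus': 1, 'ram': 1024, 'disk': 0, 'public': True, 'disabled': False}

The mandatory entries in the dump additionally carry SCS extra specs (scs:cpu-type, scs:disk0-type, scs:name-v1, scs:name-v2, hw_rng:allowed), which are listed verbatim in the traceback below.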
2025-09-18 01:10:01.188786 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-18 01:10:01.188898 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-18 01:10:01.188921 | orchestrator | │ in run │ 2025-09-18 01:10:01.188933 | orchestrator | │ │ 2025-09-18 01:10:01.188943 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-18 01:10:01.188969 | orchestrator | │ 192 │ │ 2025-09-18 01:10:01.188980 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-18 01:10:01.188992 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-18 01:10:01.189003 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-18 01:10:01.189014 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-18 01:10:01.189024 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-18 01:10:01.189035 | orchestrator | │ │ 2025-09-18 01:10:01.189048 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-18 01:10:01.189071 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-18 01:10:01.189082 | orchestrator | │ │ debug = False │ │ 2025-09-18 01:10:01.189093 | orchestrator | │ │ definitions = { │ │ 2025-09-18 01:10:01.189103 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-18 01:10:01.189114 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-18 01:10:01.189124 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-18 01:10:01.189169 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-18 01:10:01.189181 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-18 01:10:01.189191 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-18 01:10:01.189203 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-18 01:10:01.189214 | orchestrator | │ │ │ ], │ │ 2025-09-18 01:10:01.189225 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-18 01:10:01.189235 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.189246 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-18 01:10:01.189284 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.189296 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-18 01:10:01.189306 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.189317 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-18 01:10:01.189328 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.189339 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-18 01:10:01.189349 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-18 01:10:01.189360 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.189371 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.189381 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.189392 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-18 01:10:01.189403 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.189413 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-18 01:10:01.189424 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-18 01:10:01.189435 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-18 01:10:01.189466 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.189477 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-18 01:10:01.189488 | orchestrator | │ │ │ │ │ 'scs:name-v2': 
'SCS-1L-5', │ │ 2025-09-18 01:10:01.189499 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.189509 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.189520 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.189531 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-18 01:10:01.189547 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.189558 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-18 01:10:01.189569 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.189579 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.189591 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.189601 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-18 01:10:01.189612 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-18 01:10:01.189623 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.189634 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.189644 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.189655 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-18 01:10:01.189665 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.189685 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-18 01:10:01.189696 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-18 01:10:01.189707 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.189718 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.189729 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-18 01:10:01.189740 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-18 01:10:01.189750 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.189761 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.189772 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.189782 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-18 01:10:01.189793 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.189803 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.189814 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.189825 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.189835 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.189846 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-18 01:10:01.189857 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-18 01:10:01.189867 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.189878 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.189889 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.189900 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-18 01:10:01.189910 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.189926 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.189937 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-18 01:10:01.189956 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.213394 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.213439 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-18 01:10:01.213451 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-18 01:10:01.213462 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.213472 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.213483 | orchestrator | │ │ 
│ │ { │ │ 2025-09-18 01:10:01.213494 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-18 01:10:01.213504 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.213532 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-18 01:10:01.213543 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.213554 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.213565 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.213575 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-18 01:10:01.213586 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-18 01:10:01.213597 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.213607 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.213618 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.213628 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-18 01:10:01.213639 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.213650 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-18 01:10:01.213660 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-18 01:10:01.213671 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.213682 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.213692 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-18 01:10:01.213703 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-18 01:10:01.213714 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.213724 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.213735 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.213746 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-18 01:10:01.213757 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-18 01:10:01.213769 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.213779 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.213790 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.213801 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.213812 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-18 01:10:01.213822 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-18 01:10:01.213833 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.213854 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.213865 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.213876 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-18 01:10:01.213886 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-18 01:10:01.213903 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.213914 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-18 01:10:01.213937 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.213949 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.213959 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-18 01:10:01.213970 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-18 01:10:01.213981 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.213991 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.214002 | orchestrator | │ │ │ │ ... 
+19 │ │ 2025-09-18 01:10:01.214013 | orchestrator | │ │ │ ] │ │ 2025-09-18 01:10:01.214065 | orchestrator | │ │ } │ │ 2025-09-18 01:10:01.214077 | orchestrator | │ │ level = 'INFO' │ │ 2025-09-18 01:10:01.214088 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-18 01:10:01.214099 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-09-18 01:10:01.214110 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-09-18 01:10:01.214120 | orchestrator | │ │ name = 'local' │ │ 2025-09-18 01:10:01.214163 | orchestrator | │ │ recommended = True │ │ 2025-09-18 01:10:01.214175 | orchestrator | │ │ url = None │ │ 2025-09-18 01:10:01.214187 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-18 01:10:01.214201 | orchestrator | │ │ 2025-09-18 01:10:01.214212 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 │ 2025-09-18 01:10:01.214223 | orchestrator | │ in __init__ │ 2025-09-18 01:10:01.214233 | orchestrator | │ │ 2025-09-18 01:10:01.214244 | orchestrator | │ 98 │ │ self.required_flavors = definitions["mandatory"] │ 2025-09-18 01:10:01.214255 | orchestrator | │ 99 │ │ self.cloud = cloud │ 2025-09-18 01:10:01.214265 | orchestrator | │ 100 │ │ if recommended: │ 2025-09-18 01:10:01.214276 | orchestrator | │ ❱ 101 │ │ │ recommended_flavors = definitions["recommended"] │ 2025-09-18 01:10:01.214287 | orchestrator | │ 102 │ │ │ # Filter recommended flavors based on memory limit │ 2025-09-18 01:10:01.214298 | orchestrator | │ 103 │ │ │ limit_memory_mb = limit_memory * 1024 │ 2025-09-18 01:10:01.214308 | orchestrator | │ 104 │ │ │ filtered_recommended = [ │ 2025-09-18 01:10:01.214319 | orchestrator | │ │ 2025-09-18 01:10:01.214337 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-18 01:10:01.214357 | orchestrator | │ │ cloud = │ │ 2025-09-18 01:10:01.214379 | orchestrator | │ │ definitions = { │ │ 2025-09-18 01:10:01.214390 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-18 01:10:01.214401 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-18 01:10:01.214412 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-18 01:10:01.214423 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-18 01:10:01.214434 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-18 01:10:01.214445 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-18 01:10:01.214456 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-18 01:10:01.214467 | orchestrator | │ │ │ ], │ │ 2025-09-18 01:10:01.214478 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-18 01:10:01.214496 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.240699 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-18 01:10:01.240743 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.240754 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-18 01:10:01.240764 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.240775 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-18 01:10:01.240786 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.240797 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-18 01:10:01.240807 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-18 01:10:01.240819 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.240829 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.240840 | 
orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.240851 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-18 01:10:01.240861 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.240872 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-18 01:10:01.240882 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-18 01:10:01.240893 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-18 01:10:01.240903 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.240913 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-18 01:10:01.240924 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-18 01:10:01.240934 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.240962 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.240973 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.240983 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-18 01:10:01.240994 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.241005 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-18 01:10:01.241015 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.241025 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.241036 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.241046 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-18 01:10:01.241057 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-18 01:10:01.241067 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.241078 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.241099 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.241110 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-18 01:10:01.241120 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.241158 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-18 01:10:01.241169 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-18 01:10:01.241180 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.241190 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.241201 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-18 01:10:01.241212 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-18 01:10:01.241222 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.241233 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.241255 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.241266 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-18 01:10:01.241277 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.241288 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.241298 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.241309 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.241320 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.241330 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-18 01:10:01.241341 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-18 01:10:01.241351 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.241370 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.241380 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.241391 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-18 01:10:01.241402 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.241412 | 
orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.241423 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-18 01:10:01.241434 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.241445 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.241455 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-18 01:10:01.241466 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-18 01:10:01.241476 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.241487 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.241497 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.241508 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-18 01:10:01.241518 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.241529 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-18 01:10:01.241539 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.241550 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.241562 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.241573 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-18 01:10:01.241584 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-18 01:10:01.241594 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.241605 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.241616 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.241626 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-18 01:10:01.241637 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-18 01:10:01.241648 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-18 01:10:01.241658 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-18 01:10:01.241669 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.241679 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.241690 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-18 01:10:01.241700 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-18 01:10:01.241711 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.241734 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.297238 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.297338 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-18 01:10:01.297381 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-18 01:10:01.297393 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.297404 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-18 01:10:01.297415 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-18 01:10:01.297427 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.297438 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-18 01:10:01.297449 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-18 01:10:01.297460 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.297471 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.297481 | orchestrator | │ │ │ │ { │ │ 2025-09-18 01:10:01.297492 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-18 01:10:01.297503 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-18 01:10:01.297514 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-18 01:10:01.297525 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-18 01:10:01.297536 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 
2025-09-18 01:10:01.297547 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-18 01:10:01.297557 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-18 01:10:01.297568 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-18 01:10:01.297579 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-18 01:10:01.297590 | orchestrator | │ │ │ │ }, │ │ 2025-09-18 01:10:01.297601 | orchestrator | │ │ │ │ ... +19 │ │ 2025-09-18 01:10:01.297612 | orchestrator | │ │ │ ] │ │ 2025-09-18 01:10:01.297623 | orchestrator | │ │ } │ │ 2025-09-18 01:10:01.297634 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-18 01:10:01.297645 | orchestrator | │ │ recommended = True │ │ 2025-09-18 01:10:01.297656 | orchestrator | │ │ self = │ │ 2025-09-18 01:10:01.297679 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-18 01:10:01.297695 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-09-18 01:10:01.297728 | orchestrator | KeyError: 'recommended' 2025-09-18 01:10:01.687995 | orchestrator | ERROR 2025-09-18 01:10:01.688177 | orchestrator | { 2025-09-18 01:10:01.688216 | orchestrator | "delta": "0:00:08.092258", 2025-09-18 01:10:01.688241 | orchestrator | "end": "2025-09-18 01:10:01.573265", 2025-09-18 01:10:01.688262 | orchestrator | "msg": "non-zero return code", 2025-09-18 01:10:01.688282 | orchestrator | "rc": 1, 2025-09-18 01:10:01.688300 | orchestrator | "start": "2025-09-18 01:09:53.481007" 2025-09-18 01:10:01.688318 | orchestrator | } failure 2025-09-18 01:10:01.703769 | 2025-09-18 01:10:01.703850 | PLAY RECAP 2025-09-18 01:10:01.703900 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-09-18 01:10:01.703924 | 2025-09-18 01:10:01.943211 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-18 01:10:01.944284 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-18 01:10:02.667374 | 2025-09-18 01:10:02.667527 | PLAY [Post output play] 2025-09-18 01:10:02.683185 | 2025-09-18 01:10:02.683309 | LOOP [stage-output : Register sources] 2025-09-18 01:10:02.745791 | 2025-09-18 01:10:02.746018 | TASK [stage-output : Check sudo] 2025-09-18 01:10:03.599212 | orchestrator | sudo: a password is required 2025-09-18 01:10:03.782057 | orchestrator | ok: Runtime: 0:00:00.014236 2025-09-18 01:10:03.796641 | 2025-09-18 01:10:03.796807 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-18 01:10:03.835163 | 2025-09-18 01:10:03.835444 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-18 01:10:03.904432 | orchestrator | ok 2025-09-18 01:10:03.913385 | 2025-09-18 01:10:03.913529 | LOOP [stage-output : Ensure target folders exist] 2025-09-18 01:10:04.352031 | orchestrator | ok: "docs" 2025-09-18 01:10:04.352429 | 2025-09-18 01:10:04.597776 | orchestrator | ok: "artifacts" 2025-09-18 01:10:04.837448 | orchestrator | ok: "logs" 2025-09-18 01:10:04.858310 | 2025-09-18 01:10:04.858473 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-18 01:10:04.897990 | 2025-09-18 01:10:04.898326 | TASK [stage-output : Make all log files readable] 2025-09-18 01:10:05.172850 | orchestrator | ok 2025-09-18 01:10:05.182390 | 2025-09-18 01:10:05.182522 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-18 01:10:05.217246 | orchestrator | skipping: Conditional result was 
False 2025-09-18 01:10:05.232812 | 2025-09-18 01:10:05.232946 | TASK [stage-output : Discover log files for compression] 2025-09-18 01:10:05.257452 | orchestrator | skipping: Conditional result was False 2025-09-18 01:10:05.272903 | 2025-09-18 01:10:05.273048 | LOOP [stage-output : Archive everything from logs] 2025-09-18 01:10:05.319309 | 2025-09-18 01:10:05.319473 | PLAY [Post cleanup play] 2025-09-18 01:10:05.328372 | 2025-09-18 01:10:05.328474 | TASK [Set cloud fact (Zuul deployment)] 2025-09-18 01:10:05.385956 | orchestrator | ok 2025-09-18 01:10:05.397315 | 2025-09-18 01:10:05.397432 | TASK [Set cloud fact (local deployment)] 2025-09-18 01:10:05.421626 | orchestrator | skipping: Conditional result was False 2025-09-18 01:10:05.437401 | 2025-09-18 01:10:05.437550 | TASK [Clean the cloud environment] 2025-09-18 01:10:05.963317 | orchestrator | 2025-09-18 01:10:05 - clean up servers 2025-09-18 01:10:06.741386 | orchestrator | 2025-09-18 01:10:06 - testbed-manager 2025-09-18 01:10:06.829124 | orchestrator | 2025-09-18 01:10:06 - testbed-node-0 2025-09-18 01:10:06.910107 | orchestrator | 2025-09-18 01:10:06 - testbed-node-5 2025-09-18 01:10:06.996731 | orchestrator | 2025-09-18 01:10:06 - testbed-node-2 2025-09-18 01:10:07.100966 | orchestrator | 2025-09-18 01:10:07 - testbed-node-1 2025-09-18 01:10:07.191053 | orchestrator | 2025-09-18 01:10:07 - testbed-node-3 2025-09-18 01:10:07.286411 | orchestrator | 2025-09-18 01:10:07 - testbed-node-4 2025-09-18 01:10:07.381904 | orchestrator | 2025-09-18 01:10:07 - clean up keypairs 2025-09-18 01:10:07.401750 | orchestrator | 2025-09-18 01:10:07 - testbed 2025-09-18 01:10:07.423169 | orchestrator | 2025-09-18 01:10:07 - wait for servers to be gone 2025-09-18 01:10:16.138305 | orchestrator | 2025-09-18 01:10:16 - clean up ports 2025-09-18 01:10:16.327659 | orchestrator | 2025-09-18 01:10:16 - 01016ac9-d043-4956-806d-fbe89cb7fd86 2025-09-18 01:10:16.560934 | orchestrator | 2025-09-18 01:10:16 - 2bab16af-bf29-4522-bb6f-9a5cada27d70 2025-09-18 01:10:16.845348 | orchestrator | 2025-09-18 01:10:16 - 7142f12c-e772-4255-8da0-bd860ae74171 2025-09-18 01:10:17.090333 | orchestrator | 2025-09-18 01:10:17 - 9308df78-3e4f-4a98-8bb5-88109b79d6b1 2025-09-18 01:10:17.300811 | orchestrator | 2025-09-18 01:10:17 - 998b7301-7e70-4af7-9a2c-2ad278ad77c4 2025-09-18 01:10:17.503261 | orchestrator | 2025-09-18 01:10:17 - de16ad99-6ac4-471c-a8b1-9758402d8581 2025-09-18 01:10:17.936487 | orchestrator | 2025-09-18 01:10:17 - e8ec83f8-808d-46eb-a5b8-1616060b2b53 2025-09-18 01:10:18.169888 | orchestrator | 2025-09-18 01:10:18 - clean up volumes 2025-09-18 01:10:18.273990 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-2-node-base 2025-09-18 01:10:18.311587 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-5-node-base 2025-09-18 01:10:18.349306 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-4-node-base 2025-09-18 01:10:18.390845 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-3-node-base 2025-09-18 01:10:18.440607 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-1-node-base 2025-09-18 01:10:18.484897 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-manager-base 2025-09-18 01:10:18.525002 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-0-node-base 2025-09-18 01:10:18.563800 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-2-node-5 2025-09-18 01:10:18.604707 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-6-node-3 2025-09-18 01:10:18.642790 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-5-node-5 2025-09-18 01:10:18.682169 | 
orchestrator | 2025-09-18 01:10:18 - testbed-volume-8-node-5 2025-09-18 01:10:18.728894 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-1-node-4 2025-09-18 01:10:18.769613 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-0-node-3 2025-09-18 01:10:18.813003 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-7-node-4 2025-09-18 01:10:18.853966 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-3-node-3 2025-09-18 01:10:18.893339 | orchestrator | 2025-09-18 01:10:18 - testbed-volume-4-node-4 2025-09-18 01:10:18.935826 | orchestrator | 2025-09-18 01:10:18 - disconnect routers 2025-09-18 01:10:19.046667 | orchestrator | 2025-09-18 01:10:19 - testbed 2025-09-18 01:10:20.124098 | orchestrator | 2025-09-18 01:10:20 - clean up subnets 2025-09-18 01:10:20.161467 | orchestrator | 2025-09-18 01:10:20 - subnet-testbed-management 2025-09-18 01:10:20.322939 | orchestrator | 2025-09-18 01:10:20 - clean up networks 2025-09-18 01:10:20.517097 | orchestrator | 2025-09-18 01:10:20 - net-testbed-management 2025-09-18 01:10:20.856622 | orchestrator | 2025-09-18 01:10:20 - clean up security groups 2025-09-18 01:10:20.893956 | orchestrator | 2025-09-18 01:10:20 - testbed-node 2025-09-18 01:10:21.018723 | orchestrator | 2025-09-18 01:10:21 - testbed-management 2025-09-18 01:10:21.137188 | orchestrator | 2025-09-18 01:10:21 - clean up floating ips 2025-09-18 01:10:21.169822 | orchestrator | 2025-09-18 01:10:21 - 81.163.192.51 2025-09-18 01:10:21.556920 | orchestrator | 2025-09-18 01:10:21 - clean up routers 2025-09-18 01:10:21.653390 | orchestrator | 2025-09-18 01:10:21 - testbed 2025-09-18 01:10:22.999366 | orchestrator | ok: Runtime: 0:00:16.804078 2025-09-18 01:10:23.003993 | 2025-09-18 01:10:23.004228 | PLAY RECAP 2025-09-18 01:10:23.004372 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-18 01:10:23.004467 | 2025-09-18 01:10:23.134925 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-18 01:10:23.137298 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-18 01:10:23.847664 | 2025-09-18 01:10:23.847818 | PLAY [Cleanup play] 2025-09-18 01:10:23.863407 | 2025-09-18 01:10:23.863524 | TASK [Set cloud fact (Zuul deployment)] 2025-09-18 01:10:23.930750 | orchestrator | ok 2025-09-18 01:10:23.940099 | 2025-09-18 01:10:23.940254 | TASK [Set cloud fact (local deployment)] 2025-09-18 01:10:23.974562 | orchestrator | skipping: Conditional result was False 2025-09-18 01:10:23.991702 | 2025-09-18 01:10:23.991847 | TASK [Clean the cloud environment] 2025-09-18 01:10:25.090312 | orchestrator | 2025-09-18 01:10:25 - clean up servers 2025-09-18 01:10:25.582716 | orchestrator | 2025-09-18 01:10:25 - clean up keypairs 2025-09-18 01:10:25.598469 | orchestrator | 2025-09-18 01:10:25 - wait for servers to be gone 2025-09-18 01:10:25.642271 | orchestrator | 2025-09-18 01:10:25 - clean up ports 2025-09-18 01:10:25.712598 | orchestrator | 2025-09-18 01:10:25 - clean up volumes 2025-09-18 01:10:25.773523 | orchestrator | 2025-09-18 01:10:25 - disconnect routers 2025-09-18 01:10:25.794272 | orchestrator | 2025-09-18 01:10:25 - clean up subnets 2025-09-18 01:10:25.811511 | orchestrator | 2025-09-18 01:10:25 - clean up networks 2025-09-18 01:10:25.973667 | orchestrator | 2025-09-18 01:10:25 - clean up security groups 2025-09-18 01:10:26.005707 | orchestrator | 2025-09-18 01:10:26 - clean up floating ips 2025-09-18 01:10:26.030530 | orchestrator | 2025-09-18 01:10:26 - clean up routers 
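The two "Clean the cloud environment" passes above tear the testbed down in a fixed order: servers, keypairs, wait for the servers to be gone, ports, volumes, disconnect routers, subnets, networks, security groups, floating IPs, and finally the routers themselves. The testbed's own cleanup script is not part of this log, so the following is only a minimal openstacksdk sketch of that order; the connection setup, the name filter, and the helper name are assumptions, not the actual /opt/configuration code.

import openstack

def cleanup(cloud="admin", prefix="testbed"):
    # Illustrative only; mirrors the order logged above. The cloud name is
    # the one visible in the traceback locals, everything else is assumed.
    conn = openstack.connect(cloud=cloud)

    # Servers and keypairs first, then wait until the servers are really gone.
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # Leftover ports and volumes (the real script presumably scopes ports to
    # the testbed network; the log only lists them by UUID).
    for port in conn.network.ports():
        conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if volume.name.startswith(prefix):
            conn.block_storage.delete_volume(volume)

    # Detach router interfaces before subnets and networks can be removed.
    routers = [r for r in conn.network.routers() if r.name.startswith(prefix)]
    subnets = [s for s in conn.network.subnets() if s.name.startswith("subnet-" + prefix)]
    for router in routers:
        for subnet in subnets:
            conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    for subnet in subnets:
        conn.network.delete_subnet(subnet)
    for network in conn.network.networks():
        if network.name.startswith("net-" + prefix):
            conn.network.delete_network(network)

    # Security groups, floating IPs (a single one in the log above), routers.
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)
    for ip in conn.network.ips():
        conn.network.delete_ip(ip)
    for router in routers:
        conn.network.delete_router(router)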
2025-09-18 01:10:26.529289 | orchestrator | ok: Runtime: 0:00:01.320123 2025-09-18 01:10:26.533049 | 2025-09-18 01:10:26.533196 | PLAY RECAP 2025-09-18 01:10:26.533297 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-18 01:10:26.533348 | 2025-09-18 01:10:26.652504 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-18 01:10:26.655114 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-18 01:10:27.398876 | 2025-09-18 01:10:27.399032 | PLAY [Base post-fetch] 2025-09-18 01:10:27.414557 | 2025-09-18 01:10:27.414704 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-18 01:10:27.470184 | orchestrator | skipping: Conditional result was False 2025-09-18 01:10:27.484486 | 2025-09-18 01:10:27.484722 | TASK [fetch-output : Set log path for single node] 2025-09-18 01:10:27.536339 | orchestrator | ok 2025-09-18 01:10:27.544803 | 2025-09-18 01:10:27.544937 | LOOP [fetch-output : Ensure local output dirs] 2025-09-18 01:10:28.014343 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/work/logs" 2025-09-18 01:10:28.281971 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/work/artifacts" 2025-09-18 01:10:28.570768 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/521038748b9a4196ba8a7486b5537499/work/docs" 2025-09-18 01:10:28.593737 | 2025-09-18 01:10:28.593908 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-18 01:10:29.511465 | orchestrator | changed: .d..t...... ./ 2025-09-18 01:10:29.511967 | orchestrator | changed: All items complete 2025-09-18 01:10:29.512081 | 2025-09-18 01:10:30.190999 | orchestrator | changed: .d..t...... ./ 2025-09-18 01:10:30.914688 | orchestrator | changed: .d..t...... 
./ 2025-09-18 01:10:30.944626 | 2025-09-18 01:10:30.944758 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-18 01:10:30.979733 | orchestrator | skipping: Conditional result was False 2025-09-18 01:10:30.982361 | orchestrator | skipping: Conditional result was False 2025-09-18 01:10:31.007864 | 2025-09-18 01:10:31.007995 | PLAY RECAP 2025-09-18 01:10:31.008086 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-18 01:10:31.008132 | 2025-09-18 01:10:31.138649 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-18 01:10:31.141113 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-18 01:10:31.879941 | 2025-09-18 01:10:31.880105 | PLAY [Base post] 2025-09-18 01:10:31.894430 | 2025-09-18 01:10:31.894561 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-18 01:10:32.817726 | orchestrator | changed 2025-09-18 01:10:32.827885 | 2025-09-18 01:10:32.828004 | PLAY RECAP 2025-09-18 01:10:32.828069 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-18 01:10:32.828201 | 2025-09-18 01:10:32.950463 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-18 01:10:32.952915 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-18 01:10:33.723262 | 2025-09-18 01:10:33.723424 | PLAY [Base post-logs] 2025-09-18 01:10:33.733884 | 2025-09-18 01:10:33.734011 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-18 01:10:34.185689 | localhost | changed 2025-09-18 01:10:34.195716 | 2025-09-18 01:10:34.195859 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-18 01:10:34.232081 | localhost | ok 2025-09-18 01:10:34.236010 | 2025-09-18 01:10:34.236129 | TASK [Set zuul-log-path fact] 2025-09-18 01:10:34.251479 | localhost | ok 2025-09-18 01:10:34.259352 | 2025-09-18 01:10:34.259458 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-18 01:10:34.284361 | localhost | ok 2025-09-18 01:10:34.288187 | 2025-09-18 01:10:34.288309 | TASK [upload-logs : Create log directories] 2025-09-18 01:10:34.781558 | localhost | changed 2025-09-18 01:10:34.784432 | 2025-09-18 01:10:34.784534 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-18 01:10:35.296394 | localhost -> localhost | ok: Runtime: 0:00:00.006945 2025-09-18 01:10:35.305940 | 2025-09-18 01:10:35.306121 | TASK [upload-logs : Upload logs to log server] 2025-09-18 01:10:35.874512 | localhost | Output suppressed because no_log was given 2025-09-18 01:10:35.878721 | 2025-09-18 01:10:35.878919 | LOOP [upload-logs : Compress console log and json output] 2025-09-18 01:10:35.929633 | localhost | skipping: Conditional result was False 2025-09-18 01:10:35.945017 | localhost | skipping: Conditional result was False 2025-09-18 01:10:35.949246 | 2025-09-18 01:10:35.949377 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-18 01:10:35.995046 | localhost | skipping: Conditional result was False 2025-09-18 01:10:35.995702 | 2025-09-18 01:10:35.999090 | localhost | skipping: Conditional result was False 2025-09-18 01:10:36.012768 | 2025-09-18 01:10:36.014288 | LOOP [upload-logs : Upload console log and json output]
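The run itself failed earlier, in the "Bootstrap services" task: 300-openstack.sh starts the flavor-manager task, and openstack_flavor_manager/main.py:101 raises KeyError: 'recommended' because the definitions loaded for name='local' contain only 'reference' and 'mandatory' sections while the manager was started with recommended=True. A minimal sketch of a tolerant lookup, assuming the dict shape shown in the traceback locals (the helper name is illustrative, not the upstream fix):

def select_flavors(definitions, recommended=True, limit_memory=32):
    # 'mandatory' is always present in the definitions dumped above.
    flavors = list(definitions["mandatory"])
    if recommended:
        # definitions["recommended"] is the access that raised KeyError;
        # .get() keeps the bootstrap going when that section is missing.
        recommended_flavors = definitions.get("recommended", [])
        limit_memory_mb = limit_memory * 1024  # same memory filter as main.py:103
        flavors += [f for f in recommended_flavors if f.get("ram", 0) <= limit_memory_mb]
    return flavors

With such a guard the task would fall back to the mandatory SCS flavors only; the other option is to run the 'local' definition set without the recommended flag.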